From: Frank Buss
Subject: Lisp in hardware
Date: 
Message-ID: <cf2dor$1sf$1@newsreader2.netcologne.de>
I want to implement a processor core in a Xilinx FPGA 
(http://www.jroller.com/page/fb), which can execute Lisp. I think it should 
be possible to do something like Pico Lisp, which is not a normal compiler, 
but more an interpreter:

http://www.cs.uni-bonn.de/~costanza/lisp-ecoop/submissions/Burger.pdf
http://software-lab.de/ref.html

If the processor knows the concept of a list and perhaps has low-level 
commands for CAR and CDR in hardware, it should be really fast.

But first I want to know how it was solved in other systems, like the 
Symbolics Lisp Machine. Where can I find a hardware description of the 
processor? Any other resources I should read?

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de

From: Paolo Amoroso
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87hdrfw51i.fsf@plato.moon.paoloamoroso.it>
Frank Buss <··@frank-buss.de> writes:

> But first I want to know how it was solved in other systems, like the 
> Symbolics Lisp Machine. Where can I find a hardware description of the 
> processor? Any other resources I should read?

See Bitsavers.org and the Minicomputer Orphanage.


Paolo
-- 
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
Recommended Common Lisp libraries/tools (Google for info on each):
- ASDF/ASDF-INSTALL: system building/installation
- CL-PPCRE: regular expressions
- UFFI: Foreign Function Interface
From: Pascal Bourguignon
Subject: Re: Lisp in hardware
Date: 
Message-ID: <871xijnp4c.fsf@thalassa.informatimago.com>
Frank Buss <··@frank-buss.de> writes:

> I want to implement a processor core in a Xilinx FPGA 
> (http://www.jroller.com/page/fb), which can execute Lisp. I think it should 
> be possible to do something like Pico Lisp, which is not a normal compiler, 
> but more an interpreter:
> 
> http://www.cs.uni-bonn.de/~costanza/lisp-ecoop/submissions/Burger.pdf
> http://software-lab.de/ref.html
> 
> If the processor knows the concept of a list and perhaps has low-level 
> commands for CAR and CDR in hardware, it should be really fast.

I don't feel that CAR and CDR are so critical to Lisp performance.
Hardware memory management, i.e. garbage collection, probably
influences performance much more.


> But first I want to know how it was solved in other systems, like the 
> Symbolics Lisp Machine. Where can I find a hardware description of the 
> processor? Any other resources I should read?

Perhaps have a look at the clisp virtual machine. It would be nice if
it was implemented in hardware. ;-)

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

Our enemies are innovative and resourceful, and so are we. They never
stop thinking about new ways to harm our country and our people, and
neither do we.
From: Frank Buss
Subject: Re: Lisp in hardware
Date: 
Message-ID: <cf2hc9$ano$1@newsreader2.netcologne.de>
Pascal Bourguignon <····@mouse-potato.com> wrote:

> I don't feel that CAR and CDR are so critical to Lisp performance.
> Hardware memory management, i.e. garbage collection, probably
> influences performance much more.

Yes, GC should be hardware accelerated, too.

>> But first I want to know how it was solved in other systems, like the
>> Symbolics Lisp Machine. Where can I find a hardware description of
>> the processor? Any other resources I should read?
> 
> Perhaps have a look at the clisp virtual machine. It would be nice if
> it was implemented in hardware. ;-)

Perhaps it is better to execute some byte code, like the CLISP bytecode:

http://clisp.cons.org/impnotes/intr-set.html

But is it really faster than an instruction set that executes Lisp 
without compiling it to byte-code, converting it only to an internal 
s-expression format (a binary tree structure) that a special CPU core 
can execute directly? Perhaps a small set of primitives is enough, like this:

ftp://ftp.cs.cmu.edu/user/ai/lang/lisp/impl/awk/0.html

Do you know of any research comparing byte-code execution with 
s-expression execution? It looks like the old war between RISC (reduced 
instruction set computer) and CISC (complex instruction set computer), 
but I think a Lisp instruction set could be faster, with smaller program 
sizes, than what is possible with other CISC concepts.

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Marco Parrone
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87d623nk6z.fsf@marc0.dyndns.org>
Frank Buss on Sat, 7 Aug 2004 12:19:21 +0000 (UTC) writes:

> Perhaps it is better to execute some byte code, like the CLISP bytecode:

What about IA32?  At the same price, will your new HW run faster than
latest Intel/AMD processors, and a lisp like CMUCL?

-- 
Marco Parrone <·····@autistici.org> [0x45070AD6]
From: Pascal Bourguignon
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87pt63m3rr.fsf@thalassa.informatimago.com>
Marco Parrone <·····@autistici.org> writes:

> Frank Buss on Sat, 7 Aug 2004 12:19:21 +0000 (UTC) writes:
> 
> > Perhaps it is better to execute some byte code, like the CLISP bytecode:
> 
> What about IA32?  At the same price, will your new HW run faster than
> latest Intel/AMD processors, and a lisp like CMUCL?

That's a good question, because for sequential algorithms FPGAs seem
quite limited in clock frequency.

For parallel algorithms, like neural networks, it's another question.
They even have quite a number of multipliers, and there are memory
bits spread over the array in addition to the memory blocks.

But then, for neural networks they're overkill, much too configurable.


Now, they should be taken for what they are: prototyping devices.  If
you can implement on them an architecture that shows relatively
better performance, then you may be motivated (and may motivate your
VC) to buy the services of Intel to mass-produce a real version of it.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

Our enemies are innovative and resourceful, and so are we. They never
stop thinking about new ways to harm our country and our people, and
neither do we.
From: Will Hartung
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nshocF48s5uU1@uni-berlin.de>
"Marco Parrone" <·····@autistici.org> wrote in message
···················@marc0.dyndns.org...
> Frank Buss on Sat, 7 Aug 2004 12:19:21 +0000 (UTC) writes:
>
> > Perhaps it is better to execute some byte code, like the CLISP bytecode:
>
> What about IA32?  At the same price, will your new HW run faster than
> latest Intel/AMD processors, and a lisp like CMUCL?

No, probably not. But the big difference is that Frank is interested in
the "Lisp CPU" concept vs. "Lisp on IA32", which has, essentially, "been
done". He's looking for an interesting project more than anything else;
plus, by having "unlimited" hardware resources, he can perhaps come up
with some ideas that are better done in hardware than in software on a
general-purpose CPU architecture.

Let him have his fun.

Regards,

Will Hartung
(·····@msoft.com)
From: Frank Buss
Subject: Re: Lisp in hardware
Date: 
Message-ID: <cfblg5$8en$1@newsreader2.netcologne.de>
Marco Parrone <·····@autistici.org> wrote:

> I just had a limited view of what he was doing: I read that Frank was
> willing to beat mainstream processors manufacturers using consumer
> tools.

Probably this is not possible, but I'm still thinking that a CPU which 
executes Lisp without an intermediate assembly language could be faster 
than a normal CPU. Of course not with an FPGA, but if produced as an ASIC 
and clocked at the same rate as a normal CPU. Perhaps I'm wrong, 
because some Lisp compilers produce really optimized assembly code; 
nevertheless:

>   1) that having fun but not beating currently available processors
>      would be an option for him;

And writing a reader and evaluator, even in a hardware description 
language, helps me learn Lisp.

Another reason is that I need some CPU architecture for programming the 
FPGA, and implementing a normal CPU would make it more difficult for me 
to write programs, because then I would need a compiler to program the 
FPGA in a high-level language. There are already conventional solutions 
available, but it's more challenging to implement my own CPU :-)

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <86llgm7am0.fsf@goldenaxe.localnet>
Frank Buss <··@frank-buss.de> writes:

> Probably this is not possible, but I'm still thinking that a CPU which 
> executes Lisp without an intermediate assembly language could be faster 
> than a normal CPU. Of course not with an FPGA, but if produced as an ASIC 
> and clocked at the same rate as a normal CPU. Perhaps I'm wrong, 
> because some Lisp compilers produce really optimized assembly code; 
> nevertheless:

Your CPU would be crushed by general-purpose RISC processors. I would
go for some RISC-like architecture with specific support for garbage
collection and perhaps some operations on tagged data.

So the CPU would provide the means to efficiently implement, say,
write barriers or whatever your GC algorithm needs. I think that is the
key to performance. But then again, most current CPUs fit this
description somehow...


Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful
From: Rahul Jain
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87smauz708.fsf@nyct.net>
Julian Stecklina <··········@web.de> writes:

> So the CPU would provide the means to efficiently implement, say,
> write barriers or whatever you GC algorithm needs. I think that is the
> key to performance. But then again most current CPUs fit this
> description somehow...

Except that handling traps on these write barriers involves two context
switches... at least. Fast userspace traps would be a huge win,
especially for incremental GCs.
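
For illustration, here is roughly what a software write barrier does: an
incremental GC then rescans only the objects recorded in the remembered
set. This is just a sketch in Common Lisp, and the names in it
(*REMEMBERED-SET*, BARRIER-SET-CAR) are made up for the example:

```lisp
;; Sketch of a software write barrier (illustrative; names are made up).
;; Every mutation of a cons records the cell in a remembered set, so an
;; incremental GC only has to rescan the recorded cells.

(defvar *remembered-set* (make-hash-table :test #'eq))

(defun barrier-set-car (cell new-value)
  "Like (SETF (CAR CELL) NEW-VALUE), but through the write barrier."
  (setf (gethash cell *remembered-set*) t)   ; remember the mutated cell
  (setf (car cell) new-value))
```

On a stock OS, implementing this via page protection means every
recorded write traps into the kernel and back; a cheap user-level trap
would let the recording code run directly.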

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Frode Vatvedt Fjeld
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2hisbq2jat.fsf@vserver.cs.uit.no>
> Julian Stecklina <··········@web.de> writes:

>> So the CPU would provide the means to efficiently implement, say,
>> write barriers or whatever you GC algorithm needs. I think that is the
>> key to performance. But then again most current CPUs fit this
>> description somehow...

Rahul Jain <·····@nyct.net> writes:

> Except that handling traps on these write barriers involves two
> context switches... at least. Fast userspace traps would be a huge
> win, especially for incremental GCs.

But the need for context switching in order to handle traps has very
much to do with the nature of C operating systems (such as unix and
windows). Presumably one does not want to create a lisp CPU in order
to run unix on it.

-- 
Frode Vatvedt Fjeld
From: Thomas Lindgren
Subject: Re: Lisp in hardware
Date: 
Message-ID: <m37js5vmju.fsf@localhost.localdomain>
Frank Buss <··@frank-buss.de> writes:

> Probably this is not possible, but I'm still thinking that a CPU which 
> executes Lisp without an intermediate assembly language could be faster 
> than a normal CPU. Of course not with an FPGA, but if produced as an ASIC 
> and clocked at the same rate as a normal CPU. Perhaps I'm wrong, 
> because some Lisp compilers produce really optimized assembly code /.../

The general experience of the '80s seems to have been that you should
follow the RISC principles if you want to go fast: a good compiler can
do optimizations which render nifty hardware useless or even
counterproductive (cf. the VAX). Have a look at David Ungar's thesis
for Smalltalk, the BAM project at Berkeley for Prolog, the various
Lisp efforts mentioned before, or the Japanese fifth-generation
project, for examples where special hardware didn't work so well.

(The general experience of the *90s*, by the way, was that instruction
sets are irrelevant since all instructions are dynamically broken down
into micro-ops anyway. The general experience of the 00s seems to be
that circuit implementation is what really matters :-/ )

You also want to do some serious benchmarking in this case, which
makes it all the more difficult. Small and large apps behave
differently from each other, and you don't want hardware that just
does fibonacci well (or maybe you do?). But benchmarking and
characterizing a real application is a lot of work, and a set of them
even more. (Even finding them can be hard.)

Were I to do this, I would probably start by retargeting one of the
good free Lisp compilers, write a simulator (if possible using
SimpleScalar or something along those lines), and start iterating:
(a) design new instructions, (b) rewrite the compiler to take
advantage of them, (c) evaluate the effect on important pieces of
code. But that's possibly a bit beyond the scope of your project, if
only because it usually requires half a dozen graduate students and a
pile of PCs for simulation to get anywhere.

I think the suggestion someone made about cheap user-space traps
sounds good. Messing around with the memory system could be
interesting in general. Special-purpose "arithmetic-like" instructions
don't seem to add that much. (SPARC has instructions for tagged adds;
how often are they used? How much does one gain?) Implementing big
pieces of functionality (like an entire GC) in hardware doesn't seem
worth it.
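
As a rough sketch of what a tagged add buys, assuming a 00 fixnum tag in
the low two bits (illustrative only, not the exact SPARC TADDcc
semantics): a plain machine add is already correct when both operands
are fixnums, and anything else traps to generic arithmetic.

```lisp
;; Sketch of a tagged add (illustrative; not the full TADDcc behavior).
;; With a 00 tag, a fixnum n is represented as 4n, and 4a + 4b = 4(a+b),
;; so the fast path needs no untagging at all.

(defun tagged-add (x y)
  (if (and (zerop (ldb (byte 2 0) x))    ; low two bits of x are 00?
           (zerop (ldb (byte 2 0) y)))   ; low two bits of y are 00?
      (+ x y)                            ; fast path: one machine add
      (error "Tag trap: fall back to generic arithmetic")))
```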

Finally, if we're talking configurable hardware, consider generating
program-specific, instead of language-specific, hardware. There seems
to be some success with that for certain application domains.

Best,
                        Thomas
-- 
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin
 
From: Ray Dillinger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <PvrZc.11473$54.158216@typhoon.sonic.net>
Frank Buss wrote:

> Probably this is not possible, but I'm still thinking that a CPU which 
> executes Lisp without an intermediate assembly language could be faster 
> than a normal CPU. Of course not with an FPGA, but if produced as an ASIC 
> and clocked at the same rate as a normal CPU. Perhaps I'm wrong, 
> because some Lisp compilers produce really optimized assembly code; 
> nevertheless:

I had a few opportunities to work with the Symbolics Ivory chip,
and there were a few things about its design that I particularly
admired as a LISP chip.

First, its overall architecture was microcoded; each opcode was
associated with a sequence of microinstructions stored in a writable
table.  Sometimes, the microcoding of instructions was changed
during a program.  In practical terms, this means that if you were
doing something a lot, you'd make a special instruction for it and
from then on it would be a single opcode.  The up-side of that is
you get to use shorter programs because the coded instructions are
a kind of compression; since the current most painful bottleneck
on speed is access to memory, this could be a huge win in modern
systems. The down side is that it requires context switches to store
and write the writable part of the microcode table, which makes them
slower.  The Ivory compensated for slower context switches with....

Second, it had an on-chip context cache; this was a cache dedicated
to the context of up to 8 currently-executing programs.  When the
instruction stream said, switch to context #3, it would reload the
microcode table (actually just the writable part) and register
contents associated with context #3. In addition to tremendously
fast context switches, this reduced memory bandwidth, again. It
still swapped contexts in from memory, but most of the time, you
never noticed it context switching.  IIRC, it swapped out according
to a "clock" algorithm.  While swapping out one of its contexts,
it would give everything else a timeslice.  Then it would swap
out the next context while giving the other seven (including the
one it had just swapped in) a timeslice.

The support for tagged instructions seemed nice at the time, but
I don't think it buys much compared to RISC design.

One thing you could do (I don't recall whether the Ivory did it)
would be to support long-integer operations in hardware: it would
be nice to say, in machine code,

     LENGTHS 55 104;
     LONGADD A1 A2 ;

and have the 55-byte number starting at A1 efficiently added
to the 104-byte number starting at A2, with the result stored in
the memory pointed at by A2.
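
To make the idea concrete, here is the loop such an instruction would
replace, sketched in Common Lisp over little-endian byte vectors
(LONG-ADD is a made-up name mirroring the hypothetical opcode above):

```lisp
;; Software sketch of the hypothetical LONGADD: add byte vector A into
;; byte vector B in place, little-endian, propagating the carry.  Carry
;; out of the last byte of B is dropped, as in fixed-width hardware.

(defun long-add (a b)
  (let ((carry 0))
    (dotimes (i (length b) b)
      (let ((sum (+ (if (< i (length a)) (aref a i) 0)  ; A's byte, or 0
                    (aref b i)
                    carry)))
        (setf (aref b i) (ldb (byte 8 0) sum)  ; low 8 bits stay in B
              carry (ash sum -8))))))          ; high bits carry over
```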


> Another reason is, that I need some CPU architecture for programming the 
> FPGA and implementing a normal CPU would make it more difficult for me to 
> write programs, because then I need a compiler, if I want to write 
> programs for the FPGA in a high-level language. There are already 
> conventional solutions available, but it's more challenging to implement 
> my own CPU :-)

The shortest route from the USA to China is a straight line.
But perhaps not the fastest or most practical route.  Just
an observation. :-)

Good Luck.

				Bear
From: Christopher C. Stacy
Subject: Re: Lisp in hardware
Date: 
Message-ID: <uvfex1dxp.fsf@news.dtpq.com>
>>>>> On Wed, 01 Sep 2004 21:51:43 GMT, Ray Dillinger ("Ray") writes:

 Ray> I had a few opportunities to work with the Symbolics Ivory chip,
 Ray> and there were a few things about its design that I particularly
 Ray> admired as a LISP chip.  First, its overall architecture was
 Ray> microcoded; each opcode was associated with a sequence of
 Ray> microinstructions stored in a writable table.  Sometimes, the
 Ray> microcoding of instructions was changed during a program.

This is especially fascinating because the Symbolics Ivory chip 
did not have a writable microcode store.
From: Rainer Joswig
Subject: Re: Lisp in hardware
Date: 
Message-ID: <c366f098.0409020612.6d887cac@posting.google.com>
Ray Dillinger <····@sonic.net> wrote in message news:<·····················@typhoon.sonic.net>...
> Frank Buss wrote:
> 
> > Probably this is not possible, but I'm still thinking that a CPU which 
> > executes Lisp without an intermediate assembly language could be faster 
> > than a normal CPU. Of course not with an FPGA, but if produced as an ASIC 
> > and clocked at the same rate as a normal CPU. Perhaps I'm wrong, 
> > because some Lisp compilers produce really optimized assembly code; 
> > nevertheless:
> 
> I had a few opportunities to work with the Symbolics Ivory chip,
> and there were a few things about its design that I particularly
> admired as a LISP chip.
> 
> First, its overall architecture was microcoded; each opcode was
> associated with a sequence of microinstructions stored in a writable
> table.

Hmm, writeable table? The Symbolics machines before the Ivory had a writable
table. I thought the instruction set microcode for the Ivory was fixed.
From: Ray Dillinger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <hwJZc.11629$54.160870@typhoon.sonic.net>
Rainer Joswig wrote:
> Ray Dillinger <····@sonic.net> wrote in message news:<·····················@typhoon.sonic.net>...
> 

>>I had a few opportunities to work with the Symbolics Ivory chip,
>>and there were a few things about its design that I particularly
>>admired as a LISP chip.
>>
>>First, its overall architecture was microcoded; each opcode was
>>associated with a sequence of microinstructions stored in a writable
>>table.
> 
> 
> Hmm, writeable table? The Symbolics machines before the Ivory had a writable
> table. I thought the instruction set microcode for the Ivory was fixed.

???  Well, it was a Symbolics Lisp Machine, date unknown; I
thought it ran on an Ivory.  I *KNOW* it loaded the
microinstruction table on bootup though.

			Bear
From: Rainer Joswig
Subject: Re: Lisp in hardware
Date: 
Message-ID: <c366f098.0409022315.b28f509@posting.google.com>
Ray Dillinger <····@sonic.net> wrote in message news:<·····················@typhoon.sonic.net>...
> Rainer Joswig wrote:
> > Ray Dillinger <····@sonic.net> wrote in message news:<·····················@typhoon.sonic.net>...
> > 
>  
> >>I had a few opportunities to work with the Symbolics Ivory chip,
> >>and there were a few things about its design that I particularly
> >>admired as a LISP chip.
> >>
> >>First, its overall architecture was microcoded; each opcode was
> >>associated with a sequence of microinstructions stored in a writable
> >>table.
> > 
> > 
> > Hmm, writeable table? The Symbolics machines before the Ivory had a writable
> > table. I thought the instruction set microcode for the Ivory was fixed.
> 
> ???  Well, it was a Symbolics Lisp Machine, date unknown; I
> thought it ran on an Ivory.  I *KNOW* it loaded the
> microinstruction table on bootup though.

Then it did not have an Ivory microprocessor inside. The earlier
machines (36xx series) had a processor (it came on one or more large
boards) which was microprogrammable. But that series was later
replaced by machines based on a little microprocessor (the Ivory)
which was not microprogrammable.
From: Jan Rychter
Subject: Re: Lisp in hardware
Date: 
Message-ID: <m2wtzc72zy.fsf@tnuctip.rychter.com>
>>>>> "Ray" == Ray Dillinger <····@sonic.net> writes:
 Ray> Frank Buss wrote:
 >> Probably this is not possible, but I'm still thinking that a CPU
 >> which executes Lisp without an intermediate assembly language could
 >> be faster than a normal CPU. Of course not with an FPGA, but if
 >> produced as an ASIC and clocked at the same rate as a normal CPU.
 >> Perhaps I'm wrong, because some Lisp compilers produce really
 >> optimized assembly code; nevertheless:

 Ray> I had a few opportunities to work with the Symbolics Ivory chip,
 Ray> and there were a few things about its design that I particularly
 Ray> admired as a LISP chip.

 Ray> First, its overall architecture was microcoded; each opcode was
 Ray> associated with a sequence of microinstructions stored in a
 Ray> writable table.  Sometimes, the microcoding of instructions was
 Ray> changed during a program.  In practical terms, this means that if
 Ray> you were doing something a lot, you'd make a special instruction
 Ray> for it and from then on it would be a single opcode.  

I believe those wanting to do this kind of stuff should take a close
look at the Stretch CPU -- see http://www.stretchinc.com/

It seems to combine the best of both worlds -- you get a general-purpose
RISC CPU to run code, and you can easily extend the instruction set with
custom instructions implemented using the reconfigurable fabric. The
extension instructions are first-class members and are pipelined. And of
course, you can reconfigure the fabric anytime you want.

This might make much more sense than a fully FPGA-based solution.

--J.
From: Pascal Bourguignon
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87llgrm1wv.fsf@thalassa.informatimago.com>
Frank Buss <··@frank-buss.de> writes:

> Pascal Bourguignon <····@mouse-potato.com> wrote:
> 
> > I don't feel that CAR and CDR are so critical to Lisp performance.
> > Hardware memory management, i.e. garbage collection, probably
> > influences performance much more.
> 
> Yes, GC should be hardware accelerated, too.
> 
> >> But first I want to know how it was solved in other systems, like the
> >> Symbolics Lisp Machine. Where can I find a hardware description of
> >> the processor? Any other resources I should read?
> > 
> > Perhaps have a look at the clisp virtual machine. It would be nice if
> > it was implemented in hardware. ;-)
> 
> Perhaps it is better to execute some byte code, like the CLISP bytecode:
> 
> http://clisp.cons.org/impnotes/intr-set.html
> 
> But is it really faster than an instruction set that executes Lisp 
> without compiling it to byte-code, converting it only to an internal 
> s-expression format (a binary tree structure) that a special CPU core 
> can execute directly? Perhaps a small set of primitives is enough, like this:
> 
> ftp://ftp.cs.cmu.edu/user/ai/lang/lisp/impl/awk/0.html

Well, I guess it depends on whether you want to implement a
minimalistic lisp, a Scheme or a COMMON-LISP.
 

> Do you know of any research comparing byte-code execution with 
> s-expression execution? 

I know none, but you can check the difference any time with clisp. Just run:

(time    (your-favorite-function))
(compile 'your-favorite-function)
(time    (your-favorite-function))


> Looks like the old war between RISC (reduced 
> instruction set CPU) and CISC (complex instruction set CPU), but I think 
> a Lisp instruction set could be faster and with smaller program sizes 
> than what is possible with other CISC concepts.

Given that a Scheme eval+apply fits on one page of Scheme, that's less
than 2 or 3 KB, it should be possible to compile it into 200 KB of
FPGA.
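
As a rough idea of how little logic that is, a minimal eval/apply for
quote, if, and lambda application fits in a couple of dozen lines.  The
sketch below is illustrative only (TINY-EVAL and TINY-APPLY are made-up
names), not CLISP's or Pico Lisp's evaluator:

```lisp
;; A minimal evaluator over raw s-expressions, roughly the dispatch a
;; direct s-expression CPU would hard-wire.  Environments are alists;
;; closures are tagged lists (:CLOSURE lambda-form env).

(defun tiny-eval (form env)
  (cond ((symbolp form) (cdr (assoc form env)))        ; variable lookup
        ((atom form) form)                             ; self-evaluating
        ((eq (car form) 'quote) (second form))
        ((eq (car form) 'if)
         (if (tiny-eval (second form) env)
             (tiny-eval (third form) env)
             (tiny-eval (fourth form) env)))
        ((eq (car form) 'lambda) (list :closure form env))
        (t (tiny-apply (tiny-eval (car form) env)
                       (mapcar (lambda (a) (tiny-eval a env))
                               (cdr form))))))

(defun tiny-apply (fn args)
  (destructuring-bind (tag (lam params body) env) fn
    (declare (ignore tag lam))
    (tiny-eval body (append (mapcar #'cons params args) env))))
```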

I guess the question is whether the programs you will be processing
will have to manage only symbols and lists, or whether they'll work on
something else too, like numbers, strings, arrays, etc.  In that case
a lower-level processor may be more efficient (with compiled code).

Actually we could even have Lisp-to-FPGA (parallelizing) compilers
that would generate a specific "chip" for each Lisp program.  You just
have to put the boundary between hardware and software somewhere.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

Our enemies are innovative and resourceful, and so are we. They never
stop thinking about new ways to harm our country and our people, and
neither do we.
From: Frank Buss
Subject: Re: Lisp in hardware
Date: 
Message-ID: <cf3fc5$fj2$1@newsreader2.netcologne.de>
Pascal Bourguignon <····@mouse-potato.com> wrote:

> Well, I guess it depends whether you want to implement a minimalistic
> lisp, a Scheme or a COMMON-LISP.

I want to start with a minimalistic Lisp; Common Lisp on a 4,320 logic 
cell FPGA with 1 MB of external RAM would be very difficult :-)

>> Do you know of any research comparing byte-code execution with 
>> s-expression execution? 
> 
> I know none, but you can check the difference any time with clisp.
> Just run: 
> 
> (time    (your-favorite-function))
> (compile 'your-favorite-function)
> (time    (your-favorite-function))

this compares only the compiled version and the interpreted version on 
my x86 hardware, not on special Lisp hardware.

> I guess the question is whether the programs you will be processing
> will have to manage only symbols and lists, or if they'll work on
> something else too, like numbers, strings, arrays, etc.  Then a lower
> level processor may be more efficient (with compiled code).

why? I think a special Lisp processor can execute normal processing on 
numbers as fast as a standard processor (if clocked at the same rate), 
but should be able to do special Lisp commands faster, like the type 
dispatching Barry explained.

> Actually we could even have lisp -> FGPA (parallelizing) compilers
> that would generate a specific "chip" for each lisp program.  You just
> have to put the limit between hardware and software somewhere.

Yes, this is possible, because the FPGA is loaded at power-on in less 
than 1 second from Flash or any other data source, but that's too 
complicated as a starting project for me. I want to stay at the Verilog 
or VHDL layer first. Perhaps later I'll write my own gate-level 
synthesis software, which can do this.

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Barry Margolin
Subject: Re: Lisp in hardware
Date: 
Message-ID: <barmar-6A73C6.14155007082004@comcast.dca.giganews.com>
In article <··················@ram.dialup.fu-berlin.de>,
 ···@zedat.fu-berlin.de (Stefan Ram) wrote:

> Frank Buss <··@frank-buss.de> writes:
> >If the processor knows the concept of a list and perhaps has
> >low-level commands for CAR and CDR in hardware, it should be
> >really fast.
> 
>   The IBM 704 had a partial-word instruction to reference the
>   "address" (15 Bits) and "decrement" (15 Bits) part of a 
>   machine location (36 Bits). The names "CAR" and "CDR" have
>   been derived from this. So the first LISP implementation
>   already had such commands in hardware.

CAR and CDR are basically just ordinary memory accesses.  The only 
special hardware needed for them is if you want to accelerate fancy 
features like CDR-coding (which reduces memory overhead for lists) or 
auto-forwarding (which is useful for some GC designs, array adjustments, 
CHANGE-CLASS, and object aliasing schemes).
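
As a rough illustration of the CDR-coding idea (using a made-up
representation, not the actual Lisp Machine layout): the cells of a list
allocated consecutively need no stored cdr at all, only a small per-cell
code saying "the cdr is the next cell" or "the cdr is NIL", which halves
the memory for a simple list.

```lisp
;; Sketch of CDR-coding: a list packed into a vector of (code . car)
;; entries, where :CDR-NEXT means "the cdr is the next entry" and
;; :CDR-NIL means "end of list".  One entry per element instead of a
;; two-word cons.

(defun cdr-coded (list)
  "Pack LIST into a cdr-coded vector."
  (coerce (maplist (lambda (tail)
                     (cons (if (cdr tail) :cdr-next :cdr-nil)
                           (car tail)))
                   list)
          'vector))

(defun decode (vec)
  "Recover the ordinary list from a cdr-coded vector."
  (loop for entry across vec collect (cdr entry)))
```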

Probably the most significant distinguishing feature of Lisp Machines is 
their support for type dispatching.  When you invoke an arithmetic 
operator, the data is fed in parallel to the fixnum unit and the 
floating-point co-processor, and in parallel with this the microcode 
checks the type tags.  It then either latches the result register from 
the appropriate hardware unit, or traps out to a software function that 
handles all the other types.
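
A software rendition of that dispatch might look like the sketch below.
On the real hardware the two arithmetic units and the tag check run in
parallel; here everything is sequential, and GENERIC-ADD is a made-up
stand-in for the software handler:

```lisp
;; Sequential sketch of the parallel type dispatch described above:
;; check the operand tags, take the fixnum path or the float path,
;; or trap out to a generic software routine.

(defun generic-add (x y)
  ;; In a real system this would be the out-of-line handler covering
  ;; bignums, ratios, complexes, and so on.
  (+ x y))

(defun dispatching-add (x y)
  (cond ((and (typep x 'fixnum) (typep y 'fixnum)) (+ x y)) ; fixnum unit
        ((and (floatp x) (floatp y)) (+ x y))               ; float unit
        (t (generic-add x y))))                             ; trap out
```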

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nm4eeF27fb0U1@uni-berlin.de>
Barry Margolin <······@alum.mit.edu> wrote:
> Probably the most significant distinguishing feature of Lisp Machines is 
> their support for type dispatching.  When you invoke an arithmetic 

That's why Pico Lisp supports only three data types, using bits 1 and 2
in each pointer as tag bits (bit 0 is reserved for the garbage
collector):

      xxxxxxxxxxxxxxxxxxxxxxxxxxxxx010 Number pointer
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxx100 Symbol pointer
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxx000 Cell pointer

This results in pointers to the CAR of cells:

      cell
      |
      V
      +-----+-----+
      | CAR | CDR |
      +-----+-----+


and an offset of 4 bytes for symbols, giving direct access to the
symbol's value cell:

            sym
            |
            V
      +-----+-----+
      |  |  | VAL |
      +--+--+-----+
         | tail
         V

Note that this applies to "pure" Pico Lisp. The current production
version differs slightly from the above because of its support for
persistent external symbols as an additional data type.
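
Decoding those tags in software is a single mask.  In the sketch below,
plain integers stand in for raw pointers, and POINTER-TYPE is a made-up
name for illustration:

```lisp
;; Decoding the tag bits above: bit 0 is the GC bit, bits 1 and 2
;; select the type (010 number, 100 symbol, 000 cell).

(defun pointer-type (ptr)
  (case (logand ptr #b110)   ; mask bits 1-2, ignore the GC bit
    (#b010 :number)
    (#b100 :symbol)
    (#b000 :cell)))
```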


- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Rainer Joswig
Subject: Re: Lisp in hardware
Date: 
Message-ID: <joswig-95D50D.20581807082004@news-50.dca.giganews.com>
In article <············@newsreader2.netcologne.de>,
 Frank Buss <··@frank-buss.de> wrote:

> I want to implement a processor core in a Xilinx FPGA 
> (http://www.jroller.com/page/fb), which can execute Lisp. I think it should 
> be possible to do something like Pico Lisp, which is not a normal compiler, 
> but more an interpreter:
> 
> http://www.cs.uni-bonn.de/~costanza/lisp-ecoop/submissions/Burger.pdf
> http://software-lab.de/ref.html
> 
> If the processor knows the concept of a list and perhaps has low-level 
> commands for CAR and CDR in hardware, it should be really fast.
> 
> But first I want to know how it was solved in other systems, like the 
> Symbolics Lisp Machine. Where can I find a hardware description of the 
> processor? Any other resources I should read?

See the discussion here:

  http://groups.yahoo.com/group/lispmachines/

A project currently under discussion is a virtual machine for Lisp
that would be able to run on bare hardware and on some host OS. The VM
should be portable and thus not bound to x86 (I hope that my G5 Mac
will be able to use it ;-) ). The target would be a 'simple' OS with
some HAL (hardware abstraction layer) in Common Lisp. Some people
already have experience with similar stuff.
From: Alexander Schreiber
Subject: Re: Lisp in hardware
Date: 
Message-ID: <slrnchc7pr.mfh.als@mordor.angband.thangorodrim.de>
Rainer Joswig <······@lisp.de> wrote:
> In article <············@newsreader2.netcologne.de>,
>  Frank Buss <··@frank-buss.de> wrote:
> 
>> I want to implement a processor core in a Xilinx FPGA 
>> (http://www.jroller.com/page/fb), which can execute Lisp. I think it should 
>> be possible to do something like Pico Lisp, which is not a normal compiler, 
>> but more an interpreter:
>> 
>> http://www.cs.uni-bonn.de/~costanza/lisp-ecoop/submissions/Burger.pdf
>> http://software-lab.de/ref.html
>> 
>> If the processor knows the concept of a list and perhaps has low-level 
>> commands for CAR and CDR in hardware, it should be really fast.
>> 
>> But first I want to know how it was solved in other systems, like the 
>> Symbolics Lisp Machine. Where can I find a hardware description of the 
>> processor? Any other resources I should read?
> 
> See the discussion here:
> 
>   http://groups.yahoo.com/group/lispmachines/

Oh, somebody decided to re-invent the lispm-hackers mailing list?

http://lists.unlambda.com/mailman/listinfo/lispm-hackers

> A project that currently is being in discussion is a virtual
> machine for Lisp that would be able to run on bare hardware
> and on some host OS. The VM should be portable and thus
> not bound to x86 (I hope that my G5 Mac will be able to use it ;-) ).
> The target would be a 'simple' OS with some HAL (hardware abstraction
> layer) in Common Lisp. Some people already have experience with
> similar stuff.

There is currently a Lisp environment running on the bare metal of x86
machines: Movitz.

http://www.common-lisp.net/project/movitz/

Regards,
       Alex.
-- 
"Opportunity is missed by most people because it is dressed in overalls and
 looks like work."                                      -- Thomas A. Edison
From: Christopher C. Stacy
Subject: Re: Lisp in hardware
Date: 
Message-ID: <ubrhm8ba9.fsf@news.dtpq.com>
>>>>> On Sat, 7 Aug 2004 11:17:47 +0000 (UTC), Frank Buss ("Frank") writes:

 Frank> I want to implement a processor core in a Xilinx FPGA 
 Frank> (http://www.jroller.com/page/fb), which can execute Lisp. I think it should 
 Frank> be possible to do something like Pico Lisp, which is not a normal compiler, 
 Frank> but more an interpreter:

 Frank> http://www.cs.uni-bonn.de/~costanza/lisp-ecoop/submissions/Burger.pdf
 Frank> http://software-lab.de/ref.html

 Frank> If the processor knows the concept of a list and perhaps has
 Frank> low-level commands for CAR and CDR in hardware, it should be really fast.

CAR/CDR instructions are not in themselves a big deal, 
and have been implemented in hardware more than once.
(That's where the names CAR and CDR come from!)

 Frank> But first I want to know how it was solved in other systems,
 Frank> like the Symbolics Lisp Machine. Where can I find a hardware
 Frank> description of the processor? Any other resources I should read?

Googling for "Symbolics Lisp Machine" returns 3,170 hits,
and I see lots of pages and papers there which you should
read if you're interested.
From: Christopher C. Stacy
Subject: Re: Lisp in hardware
Date: 
Message-ID: <usmay6unz.fsf@news.dtpq.com>
>>>>> On Sat, 7 Aug 2004 11:17:47 +0000 (UTC), Frank Buss ("Frank") writes:
 Frank> I want to implement a processor core in a Xilinx FPGA 
 Frank> (http://www.jroller.com/page/fb), which can execute Lisp. I think it should 
 Frank> be possible to do something like Pico Lisp, which is not a normal compiler, 
 Frank> but more an interpreter:

 Frank> http://www.cs.uni-bonn.de/~costanza/lisp-ecoop/submissions/Burger.pdf
 Frank> http://software-lab.de/ref.html

 Frank> If the processor knows the concept of a list and perhaps has
 Frank> low-level commands for CAR and CDR in hardware, it should be really fast.

CAR/CDR instructions are not in themselves a big deal, 
and have been implemented in hardware more than once.
(That's where the names CAR and CDR come from!)

 Frank> But first I want to know how it was solved in other systems,
 Frank> like the Symbolics Lisp Machine. Where can I find a hardware
 Frank> description of the processor? Any other resources I should read?

Googling for "Symbolics Lisp Machine architecture" returns 3,170 hits,
and I see lots of pages and papers there which you should
read if you're interested.

From: Christopher C. Stacy
Subject: Re: Lisp in hardware
Date: 
Message-ID: <ur7qhlh64.fsf@news.dtpq.com>
>>>>> On 8 Aug 2004 08:37:12 -0700, Mark A Washburn ("Mark") writes:
 Mark> Specifically, any definition of CDR is meaningless
 Mark>  without a prior definition of CAR.

I think what's important is the definition of a cons cell,
and then you could implement CAR, CDR, or both of them.

For example, on the PDP-10, machine words are 36 bits,
machine pointers are 18 bits, and there are machine
instructions for taking the CAR (HLRZ) and CDR (HRRZ)
quantities into a full word destination.  

(The PDP-10 also features addressing modes, stack manipulation, 
and other features that make it, not accidentally, well suited for Lisp.
It was the main Lisp platform during the heyday decades of Lisp before
Lisp Machines.  The PDP-10 also had instructions called LDB and DPB,
by the way, which is where Common Lisp gets those.)
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nm3bdF279i0U1@uni-berlin.de>
Frank Buss <··@frank-buss.de> wrote:
> I want to implement a processor core in a Xilinx FPGA 
> (http://www.jroller.com/page/fb), which can execute Lisp. I think it should 
> be possible to do something like Pico Lisp, which is not a normal compiler, 
> but more an interpreter:

Actually, this was my initial intention when I developed Pico
Lisp in the late 80s: to build that virtual machine in hardware.

I soon realized, however, that it wouldn't make much sense.
A hardware implementation - or implementing the interpreter in
microcode - would probably not be much faster than an
interpreter written in C or assembly, because the interpreter
fits completely into the processor cache, leaving memory
accesses as the bottleneck.

The only thing left would be the beauty and simplicity of the
architecture.


> But first I want to know how it was solved in other systems, like the 
> Symbolics Lisp Machine. Where can I find a hardware description of the 

I believe that these machines were not really "Lisp Machines",
because they had a normal processor architecture with just a few
instructions optimized for compiled Lisp code. And such code is
IMHO not "Lisp", because it was transformed into another
language. It breaks, for example, the fundamental principle of
the equivalence of code and data.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Christopher C. Stacy
Subject: Re: Lisp in hardware
Date: 
Message-ID: <uhdre6kti.fsf@news.dtpq.com>
>>>>> On 8 Aug 2004 07:34:07 GMT, Alexander Burger ("Alexander") writes:
 Alexander> Frank Buss <··@frank-buss.de> wrote:
 >> But first I want to know how it was solved in other systems, like the 
 >> Symbolics Lisp Machine. Where can I find a hardware description of the 

 Alexander> I believe that these machines were not really "Lisp Machines",
 Alexander> because they had a normal processor architecture with just a few
 Alexander> instructions optimized for compiled Lisp code. And such code is
 Alexander> IMHO not "Lisp", because it was transformed into another
 Alexander> language. I breaks, for example, the fundamental principle of
 Alexander> the equivalence of code and data.

When people say the proper noun "Lisp Machine", they are generally
referring to the MIT Lisp Machine (eg. the CADR) and the follow-on
machines created by the same people at the commercial spin-offs,
Symbolics and LMI.   (Xerox also had Lisp workstations that could
rightly be called "Lisp machines", but around MIT we called those
"D Machines", and I believe that's also what Xerox called them.)

Your point, though, is that "Lisp Machine" was an inaccurate name.
You seem to be asserting that those machines were "normal" and
that they were not well optimized for compiling (executing?) Lisp.

Symbolics Lisp Machines included three different machine architectures: 
CADR (the original MIT design), the 3600, and Ivory.  I programmed
extensively on all of those, but did not need to know much about their
implementation to do so; I'm more familiar with the first and last.
I am mostly interested in Ivory, which was the zenith of the Lisp
Machine architecture.  Ivory was also ultimately ported from Symbolics
hardware to software, executing a very high-performance Ivory emulation
(on the only hot 64-bit hardware then available -- the DEC Alpha) 
where it was called the VLM ("Virtual Lisp Machine" aka "Open Genera").

While the CADR in particular could be thought of as a general-purpose
machine, it was most certainly designed to be good for executing Lisp.
The later machines reflected more design experience and had even 
more optimizations.  The architecture of all of these machines was
unique enough that I would not call any of them "normal", compared
to other CPUs at the time or today's architectures.

Lisp is a general purpose language, in the larger picture relatively
unconcerned with CARs and CDRs, so it is not surprising that a Lisp
Machine is a good general purpose machine.  I would take issue with
your suggestion that it is the number of instructions that matters,
while also asking what you counted and what numbers you came up with.
It seems to me that the support for data types (that's a mouthful),
calling conventions, and GC, are more important overall.  But all of
the machines did have instructions for list operations, for example.

I am very unclear on your notion of "equivalence" of code and data;
in particular how this was "broken" by those machines.

Could you describe the alternate machine architecture that you have in
mind, show examples of how you would compile for it, and contrast that
with how it would be more "optimized for compiling [I assume you 
really mean executing] Lisp code" than the realized Lisp Machines
mentioned above?  If this is to be a virtual machine, a comparison
with the VLM would be most interesting, especially an analysis of
how the emulator will perform on the targeted real hardware.

I would also be interested in an analysis of your machine architecture
versus the Xerox D Machines (which I know very little about, but which
I suspect had more Lisp machine instructions than the CADR).
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nmjtaF2bi4hU1@uni-berlin.de>
Christopher C. Stacy <······@news.dtpq.com> wrote:
> >>>>> On 8 Aug 2004 07:34:07 GMT, Alexander Burger ("Alexander") writes:
>  Alexander> I believe that these machines were not really "Lisp Machines",
>  Alexander> because they had a normal processor architecture with just a few
>  Alexander> instructions optimized for compiled Lisp code. And such code is
>  Alexander> IMHO not "Lisp", because it was transformed into another
>  Alexander> language. I breaks, for example, the fundamental principle of
>  Alexander> the equivalence of code and data.

> Your point, though, is that "Lisp Machine" was an inaccurate name.

Yes.

> You seem to be asserting that those machines were "normal" and
> that they were not well optimized for compiling (executing?) Lisp.

No, I think they were highly optimized for Lisp, probably providing
the fastest execution of Lisp programs at that time.

But they did not execute Lisp (i.e. s-expressions), but an equivalent
program which resulted from a translation (compilation) process.


> Symbolics Lisp Machines included three different machine architectures: 
> ...
> While the CADR in particular could be thought of as a general-purpose
> ...

I do not really know the architectures and special features of these
machines, but from everything I have read so far, none of them
implemented a Lisp machine in the sense I tried to explain.

This does not mean that these machines were not very good systems, and
perhaps the best you could get to run Lisp programs. They were also very
complicated.


I think the confusion in our discussion arises because I distinguish
between the Lisp _Language_ and a Lisp _Machine_.


> I am very unclear on your notion of "equivalence" of code and data;
> in particular how this was "broken" by those machines.

On those machines code (machine code) is different from data
(s-expressions). You might claim that the original Lisp code is still
around, but it is not the code being executed.


> Could describe the alternate machine architecture that you have in

   http://software-lab.de/ref.html

> mind, show examples of how you would compile it, and contrast that

No compilation. That's an important point for me. Actually, what I'm
striving for is a system which is so simple that I can understand every
aspect of it at any time, making it easy for me to extend it in any
direction. Compilation complicates everything considerably.

I use Pico Lisp for my daily work, and it gives me every freedom I
desire.

> with how it would be more "optimized for compiling [I assume you 
> really mean executing] Lisp code" than the realized Lisp Machines

Yes.

> mentioned above?  If this is to be a virtual machine, a comparison
> with the VLM would be most interesting, especially an analysis of
> how the emulator will perform on the targetted real hardware.

I am not talking about performance. Raw execution speed will always be
best with a compiled Lisp on modern hardware. But raw execution speed
doesn't matter very often. And when it does, I just put another opcode
(a function written in C) into the virtual machine.


> I would also be interested in an analysis of your machine architecture
> versus the Xerox D Machines (which I know very little about, but which
> I suspect had more Lisp machine instructions than the CADR).

It would be interesting to learn more about that machine.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <86r7qhmaxj.fsf@goldenaxe.localnet>
Alexander Burger <···@software-lab.de> writes:

[Lisp Machines]
> But they did not execute Lisp (i.e. s-expressions), but an equivalent
> program which resulted from a translation (compilation) process.

So why is this bad? Except for the one-time overhead of compilation, this
"equivalent" code beats "interpreting s-expressions" hands down...

Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2noui2F30sa2U1@uni-berlin.de>
Julian Stecklina <··········@web.de> wrote:
> Alexander Burger <···@software-lab.de> writes:

> [Lisp Machines]
> > But they did not execute Lisp (i.e. s-expressions), but an equivalent
> > program which resulted from a translation (compilation) process.

> So why is this bad? Except the one-time overhead of compilation this
> "equivalent" code beats "interpreting s-epxressions" hands-down...

This is bad for several reasons:

1. The resulting program is not necessarily 100% equivalent.

2. Many of my applications consist of plenty of one-time code, typically
   GUI components. The file is loaded, executed, produces some side
   effects (Applet layout), and is thrown away. A compilation pass would
   be counter-productive.

3. The GUI components contain many little s-expressions (function bodies
   or just simple lists) which may later be evaluated depending on user
   actions like mouse clicks and button presses. I doubt it makes sense
   to compile such code fragments, which cannot even be meaningfully
   executed outside their dynamic environment.

4. Executing s-expressions directly has great advantages. Not so much in
   application programming, because the often-cited self-modifying Lisp
   programs are rare, but for the development of the Lisp system itself.

   I want full control over my programming system. It has to be simple,
   so that I can understand and modify every aspect of it.

   For example, tracing is done in Pico Lisp by cons'ing a trace symbol
   in front of function or method bodies. The debugger (single stepper)
   consists of a few dozen lines which constantly modify s-expressions
   while they are executed. The same holds for the profiler.

It is all so simple if you dare to abandon the compiler. Many lisp-ish
(in my opinion) programming styles like (3) are ones you don't even
consider if you are caught in monolithic lexical compiler-think.


> Regards,
> -- 
>                     ____________________________
>  Julian Stecklina  /  _________________________/
>   ________________/  /
>   \_________________/  LISP - truly beautiful

sincerely,
- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Rainer Joswig
Subject: Re: Lisp in hardware
Date: 
Message-ID: <c366f098.0408090526.cc8eb4f@posting.google.com>
Alexander Burger <···@software-lab.de> wrote in message news:<··············@uni-berlin.de>...
> Julian Stecklina <··········@web.de> wrote:
> > Alexander Burger <···@software-lab.de> writes:
>  
> > [Lisp Machines]
> > > But they did not execute Lisp (i.e. s-expressions), but an equivalent
> > > program which resulted from a translation (compilation) process.
>  
> > So why is this bad? Except the one-time overhead of compilation this
> > "equivalent" code beats "interpreting s-epxressions" hands-down...
> 
> This is bad for several reasons:
> 
> 1. The resulting program is not necessarily 100% equivalent.
> 
> 2. Many of my applications consist of plenty of one-time code, typically
>    GUI components. The file is loaded, executed, produces some side
>    effects (Applet layout), and is thrown away. A compilation pass would
>    be counter-productive.
> 
> 3. The GUI components contain many little s-expressions (function bodies
>    or just simple lists) which may be later evaluated depending on user
>    actions like mouse click and button press. I doubt it makes sense to
>    compile such code fragments, which even cannot be meaningfully
>    executed outside their dynamic environment.
> 
> 4. It has great advantages to execute s-expressions. Not so much in
>    application programming, because the often-cited self-modifying Lisp
>    programs are rare. But for the development of the Lisp system itself.
> 
>    I want full control over my programming system. It has to be simple,
>    so that I can understand and modify every aspect of it.
> 
>    For example, tracing is done in Pico Lisp by cons'ing a trace symbol
>    in front of function or method bodies. The debugger (single stepper)
>    consists of a few dozen lines which constantly modify s-expressions
>    while they are executed. The same holds for the profiler.
> 
> It is all so simple if you dare to abandon the compiler. Many lisp-ish
> (in my opinion) programming styles like (3) you don't even consider if
> you are caught in monolithic lexical compiler think.

Alex, it would interest me to know, how your system is different
from other 'normal' interpreters for Lisp?

- is there something different during interpretation? I thought
  that the usual interpreter also walks over the internal representation
  that the Lisp system uses for s-expressions.

- is it that the source code is represented in the system not as
  text in a filesystem, but as internal data structures in
  the run-time? This has been done for example with Interlisp-D,
  where you edit your code with a structure editor.

one of the above? something else?  

Rainer Joswig
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2npfjlF38h3lU1@uni-berlin.de>
Rainer Joswig <······@corporate-world.lisp.de> wrote:
> Alex, it would interest me to know, how your system is different
> from other 'normal' interpreters for Lisp?

> - is there something different during interpretation? I thought
>   that the usual interpreter also walks over the internal representation
>   that the Lisp system uses for s-expressions.

Yes, that's the same. But I don't know what a "normal" interpreter is.
Pico Lisp decided not to care about supporting a compiler. This
has some important consequences:

1. Each built-in function can simply get passed the whole unevaluated
   argument list, and can decide by itself what to do with these
   arguments. Thus, all functions are treated uniformly and can be
   invoked very fast. These functions can do very different things, like
   'setq' evaluating only every other argument, 'if' behaving
   conditionally, 'while' evaluating its arguments repeatedly, and so
   on.

   If a compiler has to be supported, it needs to know about the
   built-in functions and must pass already evaluated arguments to them.
   This makes the dispatching mechanism in the interpreter more time
   consuming.

   Actually, I've never seen any other Lisp using such a function call
   strategy.

2. Completely stay with the dynamic (shallow) binding strategy. This
   strategy is considered bad today, because people got used to static
   thinking in other compiled languages from the Algol -> C families. I
   love it, because it gives more power to the programmer (which
   unfortunately some people also regard as bad).

   Dynamic binding is very fast in a Lisp interpreter. If you also
   support a compiler, you'll want lexical binding because it is faster
   in that case, but for compatibility the interpreter must also bind
   lexically. Then it will get slow.

3. Only the minimum set of data types: Numbers (Bignums), Symbols and
   Cells (Lists). Then the tags can reside in the low order bits of
   32-bit-pointers without causing trouble when dereferencing these
   pointers.


So, hm, what's the difference? I don't really know, perhaps it is only
that it is "simpler" (whatever that means).

> Rainer Joswig

Servus,
- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Rainer Joswig
Subject: Re: Lisp in hardware
Date: 
Message-ID: <joswig-6D3E89.19085409082004@individual.net>
In article <··············@uni-berlin.de>,
 Alexander Burger <···@software-lab.de> wrote:

> Rainer Joswig <······@corporate-world.lisp.de> wrote:
> > Alex, it would interest me to know, how your system is different
> > from other 'normal' interpreters for Lisp?
> 
> > - is there something different during interpretation? I thought
> >   that the usual interpreter also walks over the internal representation
> >   that the Lisp system uses for s-expressions.
> 
> Yes, that's the same. But I don't know what a "normal" interpreter is.

Like MCL's, ACL's, LispWorks's, CMUCL's, and probably a hundred
other interpreters.

> Pico Lisp decided not to have to care about support for a compiler. This
> has some important consequences:
> 
> 1. Each built-in function can simply get passed the whole unevaluated
>    argument list, and can decide by itself what to do with these
>    arguments. Thus, all functions are treated uniformly and can be
>    invoked very fast. These functions can do very different things, like
>    'setq' evaluating only every other argument, 'if' behaving
>    conditionally, 'while' evaluating its arguments repeatedly, and so
>    on.
> 
>    If a compiler has to be supported, it needs to know about the
>    built-in functions and must pass already evaluated arguments to them.
>    For that the dispatching mechanism in the interpreter is more time
>    consuming.
> 
>    Actually, I've never seen any other Lisp using such a function call
>    strategy.

That's basically the old FEXPR stuff. I was using stuff like this
twenty years ago on an Apple II.

I think Kent Pitman's paper describes the history of approaches like that.

http://www.nhplace.com/kent/Papers/Special-Forms.html

> 
> 2. Completely stay with the dynamic (shallow) binding strategy. This
>    strategy is considered bad today, because people got used to static
>    thinking in other compiled languages from the Alogol -> C families. I
>    love it, because it gives more power to the programmer (which
>    unfortunately some people also regard as bad).
> 
>    Dynamic binding is very fast in a Lisp interpreter. If you also
>    support a compiler, you'll want lexical binding because it is faster
>    then, but for compatibility the interpreter also must bind lexically.
>    Than it will get slow.

Lexical binding often is seen as less error prone, semantically
cleaner and nearer to Lambda Calculus.

> 3. Only the minimum set of data types: Numbers (Bignums), Symbols and
>    Cells (Lists). Then the tags can reside in the low order bits of
>    32-bit-pointers without causing trouble when dereferencing these
>    pointers.

I guess you can get around this. There was once a paper describing
the SPARC architecture's support for tags like this, as used by
Lucid CL.

> So, hm, what's the difference? I don't really know, perhaps it is only
> that it is "simpler" (whatever that means).

Hmm, then it seems to me that I have seen a multitude of systems
(interpreter + dynamic binding + few datatypes)
like that. The original XLisp for example would kind of fit
the bill.

Rainer Joswig
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nptrbF3fi8kU1@uni-berlin.de>
Rainer Joswig <······@lispmachine.de> wrote:
> In article <··············@uni-berlin.de>,
>  Alexander Burger <···@software-lab.de> wrote:
> > 1. Each built-in function can simply get passed the whole unevaluated
> >    argument list, and can decide by itself what to do with these
> >    arguments. Thus, all functions are treated uniformly and can be
> >    invoked very fast. These functions can do very different things, like
> >    'setq' evaluating only every other argument, 'if' behaving
> >    conditionally, 'while' evaluating its arguments repeatedly, and so
> >    on.
> > 
> >    If a compiler has to be supported, it needs to know about the
> >    built-in functions and must pass already evaluated arguments to them.
> >    For that the dispatching mechanism in the interpreter is more time
> >    consuming.
> > 
> >    Actually, I've never seen any other Lisp using such a function call
> >    strategy.

> That's basically the old FEXPR stuff. I was using stuff this

No, FEXPRs are on the Lisp level, not the built-in level. Perhaps you mean FSUBRs?

I disassembled several Lisp systems back then, and found none using that
on the SUBR level. But this doesn't matter; I did not want to say that
using that strategy is especially clever, just that it doesn't go
together with a compiler if you use it exclusively and uniformly for all
built-in functions.



> > 2. Completely stay with the dynamic (shallow) binding strategy. This
> >    strategy is considered bad today, because people got used to static
> >    thinking in other compiled languages from the Alogol -> C families. I
> >    love it, because it gives more power to the programmer (which
> >    unfortunately some people also regard as bad).
> > 
> >    Dynamic binding is very fast in a Lisp interpreter. If you also
> >    support a compiler, you'll want lexical binding because it is faster
> >    then, but for compatibility the interpreter also must bind lexically.
> >    Than it will get slow.

> Lexical binding often is seen as less error prone, semantically
> cleaner and nearer to Lambda Calculus.

I agree that dynamic binding is error prone. But it is very flexible.
In Pico Lisp we use transient symbols in those cases, which are
basically equivalent in scope to 'static' identifiers in C-like
languages. The advantage is that the programmer can fine-control the
scopes.


> Hmm, then it seems to me that I have seen a multitude of systems
> (interpreter + dynamic binding + few datatypes)
> like that. The original XLisp for example would kind of fit
> the bill.

Sure. So what's the problem? I did not claim that Pico Lisp is anything
special. _You_ asked for differences. I did not start this thread.

What I want is just a system so simple that I can control every aspect
of it, and build up complexities at higher levels as demanded, instead
of overloading the base system. If you start with a fully bloated system
it is very hard to tune it according to your needs. For example, how
would you insert persistent objects as a first-class datatype into
Common Lisp?

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <86ekmeoceg.fsf@web.de>
Alexander Burger <···@software-lab.de> writes:

> For example, how
> would you insert persistent objects as a first-class datatype into
> Common Lisp?

"PLOB! (Persistent Lisp OBjects!) is an Object Database, implementing
orthogonal persistency for LISP and CLOS objects. It also contains
important database features like transactions, locking and associative
search over persistent objects. "

From a PLOB! example:

(defclass count-instances ()
  ((instances
    :accessor instance-count
    :allocation :class
    :initform 0
    :type fixnum
    :documentation "
 The value in this slot is persistent. It is incremented for each
 transient instance created."))
  (:metaclass persistent-metaclass))

Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nu68oF4j4kvU1@uni-berlin.de>
Julian Stecklina <··········@web.de> wrote:
> Alexander Burger <···@software-lab.de> writes:

> > For example, how
> > would you insert persistent objects as a first-class datatype into
> > Common Lisp?

> "PLOB! (Persistent Lisp OBjects!) is an Object Database, implementing

I was talking of first-class datatypes.

And, above all, about simplicity. 50 megabytes doesn't look very
simple.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Marco Antoniotti
Subject: Re: Lisp in hardware
Date: 
Message-ID: <Q1rSc.12$D5.6227@typhoon.nyu.edu>
Alexander Burger wrote:

> Julian Stecklina <··········@web.de> wrote:
> 
>>Alexander Burger <···@software-lab.de> writes:
> 
> 
>>>For example, how
>>>would you insert persistent objects as a first-class datatype into
>>>Common Lisp?
> 
> 
>>"PLOB! (Persistent Lisp OBjects!) is an Object Database, implementing
> 
> 
> I was talking of first-class datatypes.

The class defined with :metaclass is a first-class class, hence a datatype.

> And in the main run, about simplicity. 50 Megabytes don't look very
> simple.

That depends.  To me simplicity is also simplicity of programming.

Cheers
--
Marco
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <868ycllnj8.fsf@goldenaxe.localnet>
Alexander Burger <···@software-lab.de> writes:

> Julian Stecklina <··········@web.de> wrote:
>> Alexander Burger <···@software-lab.de> writes:
>
>> > For example, how
>> > would you insert persistent objects as a first-class datatype into
>> > Common Lisp?
>
>> "PLOB! (Persistent Lisp OBjects!) is an Object Database, implementing
>
> I was talking of first-class datatypes.

Why is it not? You can pass it around, you can return it, modify
it... It is as first-class as every CL object.

Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nv4v3F530tbU1@uni-berlin.de>
Julian Stecklina <··········@web.de> wrote:
> Alexander Burger <···@software-lab.de> writes:

> > Julian Stecklina <··········@web.de> wrote:
> > I was talking of first-class datatypes.

> Why is it not? You can pass it around, you can return it, modify
> it... It is as first-class as every CL object.

I want more than just "every CL object". I want it to be a separate
type, so that for example the garbage collector can do special things
with it (like not always disposing of it, even though it is currently
not referenced).

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <86acx1z1ib.fsf@goldenaxe.localnet>
Alexander Burger <···@software-lab.de> writes:

[Persistent first-class objects]
> I want more than just "every CL object". I want it to be a separate
> type, so that for example the garbage collector can do special things
> with it (like not always disposing of it, despite it is currently not
> referenced).

So you are trading flexibility for speed? :)

Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2o0qscF5k2bcU1@uni-berlin.de>
Julian Stecklina <··········@web.de> wrote:
> Alexander Burger <···@software-lab.de> writes:

> [Persistent first-class objects]
> > I want more than just "every CL object". I want it to be a separate
> > type, so that for example the garbage collector can do special things
> > with it (like not always disposing of it, despite it is currently not
> > referenced).

> So you are trading flexibility for speed? :)

Thanks. Now you give me another opportunity to write something which
will cause other people to accuse me of being contradictory.

Yes, I will trade flexibility for speed if it is necessary. For example,
I might code some part of an application in 'C' if it is time-critical.
I assume that Lisp users understand why I regard a module designed and
coded in 'C' as less flexible than the same functionality implemented in
Lisp.

In other cases I trade speed for flexibility (discussed in this thread),
if the speed is sufficient, and the gain in flexibility increases my
productivity as a programmer. I need such freedom of decision.


However, I don't understand your question in the cited context. The
above design decision had to do neither with speed nor flexibility; it
is a requirement for the intended functionality. In fact, the system
loses some speed because of the additional type.

>                     ____________________________
>  Julian Stecklina  /  _________________________/
>   ________________/  /
>   \_________________/  LISP - truly beautiful

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <86wu011zs6.fsf@goldenaxe.localnet>
Alexander Burger <···@software-lab.de> writes:

>> So you are trading flexibility for speed? :)

[..]

> However, I don't understand your question in the cited context. The
> above design decision had to do neither with speed nor flexibility, but
> is a requirement for the intended functionality. In fact, the system
> looses some speed because of the additional type.

But why do you need garbage collector magic for persistent objects, if
not for speed?

Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2o95a9F898b9U1@uni-berlin.de>
Julian Stecklina <··········@web.de> wrote:
> Alexander Burger <···@software-lab.de> writes:
> > However, I don't understand your question in the cited context. The
> > above design decision had to do neither with speed nor flexibility, but
> > is a requirement for the intended functionality. In fact, the system
> > looses some speed because of the additional type.

> But why do you need garbage collector magic for persistent objects, if
> not for speed?

Traditionally, a Lisp symbol is either "interned" or "uninterned".
Interned symbols are kept in some hashed or indexed data structure
("oblist") and therefore will not be removed during garbage collection.
Uninterned symbols will be removed as soon as there is no reference
pointing to them.

In addition, Pico Lisp supports a special type of persistent symbol, the
"external" symbol, which is physically just like a normal symbol
(having a name, a value cell and a property list), but which is treated
differently by the system.

Whenever such a symbol's value cell or property list is accessed for the
first time (e.g. 'eval' or 'get'), it is fetched automatically from the
database. Modifications with functions like 'set' or 'put' modify the
symbol in the normal way, but keep all changes in memory until a call to
'commit' or 'rollback'.
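
The fetch-on-first-access and commit/rollback behaviour can be sketched
roughly in Python (all names here are illustrative stand-ins, not Pico
Lisp's actual internals; a plain dict plays the role of the database):

```python
class ExternalSymbol:
    """Sketch of a persistent symbol: fetched from the database on first
    access, with modifications kept in memory until commit or rollback."""

    def __init__(self, oid, db):
        self.oid = oid        # key identifying the symbol in the store
        self.db = db          # backing store (a dict in this sketch)
        self.loaded = False   # fetched yet?
        self.dirty = False    # modified but not committed?
        self.props = None     # stands in for value cell + property list

    def _fetch(self):
        # Automatic fetch on first access (as with 'eval' or 'get').
        if not self.loaded:
            self.props = dict(self.db[self.oid])
            self.loaded = True

    def get(self, key):
        self._fetch()
        return self.props.get(key)

    def put(self, key, val):
        # Modifies the in-memory copy only.
        self._fetch()
        self.props[key] = val
        self.dirty = True

    def commit(self):
        # Write pending changes back to the store.
        if self.dirty:
            self.db[self.oid] = dict(self.props)
            self.dirty = False

    def rollback(self):
        # Discard the in-memory state; the next access re-fetches.
        self.loaded = False
        self.dirty = False
        self.props = None

db = {"obj1": {"name": "A"}}
sym = ExternalSymbol("obj1", db)
sym.put("name", "B")                  # change is held in memory ...
assert db["obj1"]["name"] == "A"
sym.commit()                          # ... until 'commit'
assert db["obj1"]["name"] == "B"
```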

Knowledge about external symbols pervades the whole system. Most
primitives (not just the gc) know about them. You can pass a message to
(or invoke a method on) a symbol, no matter whether it is currently in
memory or not.

While in memory, external symbols are kept in their own special
"oblist". All objects in the database are connected to each other via
links, joints, and tree data structures, reachable from a single root
object.

Now the garbage collector has to obey special rules:

1. Although these symbols are referenced from their "oblist", they are
   _allowed_ to be removed from memory if they have no other references.

2. They are _not_ allowed to be removed when they were modified but not
   committed yet, even though they may not be referred to from anywhere.

3. Because all external symbols refer to each other, a normal garbage
   collector would be unable to find unused symbols, and the memory
   consumption would grow without end while more and more symbols were
   fetched from the database. Therefore, the garbage collector knows how
   to temporarily ignore certain references.
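
A toy mark phase obeying the three rules might look as follows in Python
(a sketch with invented names, not Pico Lisp's actual collector):

```python
class Obj:
    """Toy heap object; 'external' marks a persistent (database) symbol."""
    def __init__(self, external=False, dirty=False):
        self.external = external   # lives in the external "oblist"
        self.dirty = dirty         # modified but not yet committed
        self.refs = []             # outgoing references

def collectible(roots, externals):
    """Mark phase: dirty externals are treated as roots (rule 2), links
    between two externals are not followed (rule 3), and any external
    left unmarked may be dropped from memory (rule 1)."""
    marked = set()
    stack = list(roots) + [s for s in externals if s.dirty]   # rule 2
    while stack:
        obj = stack.pop()
        if id(obj) in marked:
            continue
        marked.add(id(obj))
        for ref in obj.refs:
            if obj.external and ref.external:
                continue          # rule 3: ignore external-to-external links
            stack.append(ref)
    # rule 1: unmarked externals may leave memory
    return [s for s in externals if id(s) not in marked]

# Two externals referencing only each other are collectible ...
a, b = Obj(external=True), Obj(external=True)
a.refs, b.refs = [b], [a]
assert collectible([], [a, b]) == [a, b]
# ... unless one has uncommitted changes (then it acts as a root)
b.dirty = True
assert collectible([], [a, b]) == [a]
```

Dropping an unmarked external symbol only frees its in-memory copy; the
data stays in the database and can always be fetched again.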

Regards,
- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <86pt6060g8.fsf@goldenaxe.localnet>
Alexander Burger <···@software-lab.de> writes:

[Compiled code]
> This is bad for several reasons:
>
> 1. The resulting program is not necessarily 100% equivalent.

Your compiler is broken. ;)

> 2. Many of my applications consist of plenty of one-time code, typically
>    GUI components. The file is loaded, executed, produces some side
>    effects (Applet layout), and is thrown away. A compilation pass would
>    be counter-productive.

Many CLs have a compiler and an interpreter, if that really is a
concern to you. They are totally transparent to the user as well. If
at work I get thrown into the debugger because a GUI handler has a
bug, I can fix it and restart the frame. It is meaningless whether
this handler was compiled or not.

>    For example, tracing is done in Pico Lisp by cons'ing a trace symbol
>    in front of function or method bodies. The debugger (single stepper)
>    consists of a few dozen lines which constantly modify s-expressions
>    while they are executed. The same holds for the profiler.

Simplicity is nice to have, agreed. But living without strings and
vectors by design...

> It is all so simple if you dare to abandon the compiler. Many lisp-ish
> (in my opinion) programming styles like (3) you don't even consider if
> you are caught in monolithic lexical compiler think.

Why should you consider?

Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2npbccF3871cU1@uni-berlin.de>
Julian Stecklina <··········@web.de> wrote:
> Alexander Burger <···@software-lab.de> writes:

> [Compiled code]
> > This is bad for several reasons:
> >
> > 1. The resulting program is not necessarily 100% equivalent.

> Your compiler is broken. ;)

Ok, probably it is no problem nowadays. As you may have noticed, my last
contact with (non-self-written) Lisp was nearly 20 years ago. Most Lisps
at that time used dynamic binding for the interpreter and lexical
binding for the compiler, introducing significant differences.

Then it depends on how you define "equivalent". Identical output for any
given input? I strongly doubt that, considering the complexity of that
matter.


> > 2. Many of my applications consist of plenty of one-time code, typically
> >    GUI components. The file is loaded, executed, produces some side
> >    effects (Applet layout), and is thrown away. A compilation pass would
> >    be counter-productive.

> Many CLs have a compiler and an interpreter, if that really is a
> concern to you. They are totally transparent to the user as well. If
> at work I get thrown into the debugger because a GUI handler has a
> bug, I can fix it and restart the frame. It is meaningless whether
> this handler was compiled or not.

Then you have the double disadvantage of

- a complicated system (because it has to support the compiler)

- and inefficient execution (because the interpreter will run with a
  lexical binding strategy to stay compatible with the (now unused)
  compiler. Lexical binding is not efficient in an interpreter).


> Simplicity is nice to have, agreed. But living without strings and
> vectors per design...

Where do you absolutely need vectors?

Concerning strings, I agree. Pico Lisp uses symbols for strings, if that
is enough for you. However, it supports almost no direct access to the
characters, only to the first one, or after you have exploded them into
a list. I feel I can live very well with that. After unpacking, you have
the full power of Lisp for manipulating the characters.


> > It is all so simple if you dare to abandon the compiler. Many lisp-ish
> > (in my opinion) programming styles like (3) you don't even consider if
> > you are caught in monolithic lexical compiler think.

> Why should you consider?

To bring more fun to programming.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Paul F. Dietz
Subject: Re: Lisp in hardware
Date: 
Message-ID: <Z8KdnSu2hMEPH4rcRVn-iA@dls.net>
Alexander Burger wrote:

> Ok, probably it is no problem nowadays. As you may have noticed, my last
> contact with (non-self-written) Lisp was nearly 20 years ago. Most Lisps
> at that time used dynamic binding for the interpreter and lexical
> binding for the compiler, introducing significant differences.
> 
> Then it depends on how you define "equivalent". Identical output for any
> given input? I strongly doubt that, considering the complexity of that
> matter.

The random tester in the gcl ansi-tests suite tests for just this property.
It builds random lambda expressions (from a subset of Common Lisp) and compares
their values on a variety of random inputs and different choices of compilation
vs. not-explicitly-compiled evaluation (and various OPTIMIZE settings).

This approach has been surprisingly effective at finding bugs, and in most cases
implementors have been quite good about fixing them.
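
The scheme can be sketched in Python with a toy expression language
(an illustration of the idea only, not the actual gcl ansi-tests
harness): generate random expressions, evaluate each both by
tree-walking interpretation and through a separate "compilation" path,
and check that the two always agree.

```python
import random

# Toy expression language: an expression is an int literal, the variable
# 'x', or a tuple (op, left, right) with op one of '+', '-', '*'.

def interp(e, x):
    """Evaluation path 1: straightforward tree-walking interpreter."""
    if e == 'x':
        return x
    if isinstance(e, int):
        return e
    op, a, b = e
    l, r = interp(a, x), interp(b, x)
    return l + r if op == '+' else l - r if op == '-' else l * r

def compile_expr(e):
    """Evaluation path 2: 'compile' the tree into nested closures."""
    if e == 'x':
        return lambda x: x
    if isinstance(e, int):
        return lambda x: e
    op, a, b = e
    fa, fb = compile_expr(a), compile_expr(b)
    f = {'+': lambda l, r: l + r,
         '-': lambda l, r: l - r,
         '*': lambda l, r: l * r}[op]
    return lambda x: f(fa(x), fb(x))

def random_expr(depth, rng):
    if depth == 0:
        return rng.choice(['x', rng.randrange(-9, 10)])
    return (rng.choice('+-*'),
            random_expr(depth - 1, rng),
            random_expr(depth - 1, rng))

# Differential test: both evaluation modes must agree on every input.
rng = random.Random(0)
for _ in range(500):
    e = random_expr(3, rng)
    f = compile_expr(e)
    for x in range(-5, 6):
        assert interp(e, x) == f(x)
```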

I'll add that the fact that the implementation may have bugs that in
(rare) instances make compiled and uncompiled code behave differently
is not the same as the language having major features that behave
differently in the two (as those paleolisps did).  You're asking
(in effect) for the system to be bug-free.

That's *my* schtick, buddy! ;)

	Paul
From: Matthew Danish
Subject: Re: Lisp in hardware
Date: 
Message-ID: <20040809161934.GG15746@mapcar.org>
On Mon, Aug 09, 2004 at 01:09:34PM +0000, Alexander Burger wrote:
> Julian Stecklina <··········@web.de> wrote:
> > Alexander Burger <···@software-lab.de> writes:
> 
> > [Compiled code]
> > > This is bad for several reasons:
> > >
> > > 1. The resulting program is not necessarily 100% equivalent.
> 
> > Your compiler is broken. ;)
> 
> Ok, probably it is no problem nowadays. As you may have noticed, my last
> contact with (non-self-written) Lisp was nearly 20 years ago. Most Lisps
> at that time used dynamic binding for the interpreter and lexical
> binding for the compiler, introducing significant differences.

That was stupid, and fortunately CL fixed that.  The semantics of
interpreted and compiled code are supposed to be same (except, of
course, where it wouldn't make sense, such as COMPILED-FUNCTION-P).

> Then it depends on how you define "equivalent". Identical output for any
> given input? I strongly doubt that, considering the complexity of that
> matter.

The problem of ensuring that the semantics of the output of a compiler
are the same as the input is not an easy or solved problem.  It should
be unnecessary to say this, but an interpreter is not much different.
Does the behavior of the interpreter match the specification?  It needs
to be proven, though usually it ends up simply being tested.

> > > 2. Many of my applications consist of plenty of one-time code, typically
> > >    GUI components. The file is loaded, executed, produces some side
> > >    effects (Applet layout), and is thrown away. A compilation pass would
> > >    be counter-productive.
> 
> > Many CLs have a compiler and an interpreter, if that really is a
> > concern to you. They are totally transparent to the user as well. If
> > at work I get thrown into the debugger because a GUI handler has a
> > bug, I can fix it and restart the frame. It is meaningless whether
> > this handler was compiled or not.
> 
> Then you have the double disadvantage of
> 
> - a complicated system (because it has to support the compiler)

It's not a disadvantage to the user.  Also, these days, the interpreter
is usually the compiler run quickly.  Welcome to the 21st century, where
compilers, though difficult, are not too complicated to consider.  (And
seeing how many there are for Lisp, perhaps it is not so difficult).

> - and inefficient execution (because the interpreter will run with a
>   lexical binding strategy to stay compatible with the (now unused)
>   compiler. Lexical binding is not efficient in an interpreter).

Where do you get this nonsense from?  You realize that people have known
how to implement lexical scope in an interpreter for over 30 years,
right?  It's very easy, and hardly inefficient.  It's just a matter of
storing and using the function's environment in place of the current one
when you call a function.   There is no excuse for a language with
first-class functions not to have lexical scope available anymore.
Implementing an interpreter with lexical scope is the sort of problem
you give to first-year undergraduate students, these days.

> > Simplicity is nice to have, agreed. But living without strings and
> > vectors per design...
> 
> Where do you absolutely need vectors?
> 
> Concerning strings, I agree. Pico Lisp uses Symbols for strings, if that
> is enough for you. However, it supports amost no direct access to the
> characters, only to the first one, or after you exploded them into a
> list. I feel I can live very well with that. After unpacking, you have
> the full power of Lisp for manipulating the characters.

Using symbols instead of strings is like using a hammer to type on the
keyboard.  Lispers stopped doing stuff like this 20 years ago.  Strings
and symbols have different purposes.

> > > It is all so simple if you dare to abandon the compiler. Many lisp-ish
> > > (in my opinion) programming styles like (3) you don't even consider if
> > > you are caught in monolithic lexical compiler think.
> 
> > Why should you consider?
> 
> To bring more fun to programming.

I think you have not considered the true implications of lexical scope
and closures.  I think you will find that a lot of your ridiculous hacks
are unnecessary; for example, the ability to introduce small fragments of
code into the running program (for GUI or whatnot).  This is something
that can be handled efficiently and compiled with lexical closures.  The
big reason why CLers do not use EVAL is because they don't need to:
closures allow you to build functions efficiently even using information
gleaned at run-time.
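
The point can be illustrated with a minimal Python sketch (hypothetical
names, Python standing in for any lexically scoped Lisp): the closure
captures run-time information directly, so no interpreter needs to be
invoked on freshly built source.

```python
# Build a function from data known only at run time, using a closure
# rather than splicing the data into source text and calling eval().

def make_scaler(factor):
    # 'factor' is captured lexically; the returned function needs no
    # run-time code generation or interpretation.
    return lambda x: x * factor

runtime_value = 3            # imagine this was read from user input
triple = make_scaler(runtime_value)
assert triple(14) == 42

# The eval-based equivalent rebuilds and interprets source text instead:
triple2 = eval("lambda x: x * %d" % runtime_value)
assert triple2(14) == 42
```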

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2npotpF3bh2tU1@uni-berlin.de>
Matthew Danish <·······@andrew.cmu.edu> wrote:
> On Mon, Aug 09, 2004 at 01:09:34PM +0000, Alexander Burger wrote:
> > Julian Stecklina <··········@web.de> wrote:
> > > Alexander Burger <···@software-lab.de> writes:
> > - and inefficient execution (because the interpreter will run with a
> >   lexical binding strategy to stay compatible with the (now unused)
> >   compiler. Lexical binding is not efficient in an interpreter).

> Where do you get this nonsense from?  You realize that people have known
> how to implement lexical scope in an interpreter for over 30 years,
> right?  It's very easy, and hardly inefficient.  It's just a matter of

Of course it can be implemented. But how fast does it run?

As I tried to explain in the papers and previous postings (obviously you
didn't read them), I hate to put stress on raw speed. But if you insist,
please show me a Lisp interpreter faster than Pico Lisp, regarding the
bare evaluation mechanism.


> Using symbols instead of strings is like using a hammer to type on the
> keyboard.  Lispers stopped doing stuff like this 20 years ago.  Strings
> and symbols have different purposes.

What I want to do is apply Lisp's rich set of functions to as many data
manipulations as possible. So it is just fine to convert a symbol to a
list and back. What is wrong with uniformity?


> > To bring more fun to programming.

> I think you have not considered the true implications of lexical scope
> and closures.  I think you find that a lot of your ridiculous hacks are
> unnecessary, for example, that ability to introduce small fragments of
> code into the running program (for GUI or whatnot).  This is something

Please don't judge things you don't understand.
(Read the background in http://software-lab.de/dbui.html)

If CL is so flexible, please provide a solution to

   http://software-lab.de/succ.html

then. Everybody says it's so easy, but nobody's doing it.
I'm still waiting ...

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Matthew Danish
Subject: Re: Lisp in hardware
Date: 
Message-ID: <20040809182543.GH15746@mapcar.org>
On Mon, Aug 09, 2004 at 05:00:44PM +0000, Alexander Burger wrote:
> Matthew Danish <·······@andrew.cmu.edu> wrote:
> > On Mon, Aug 09, 2004 at 01:09:34PM +0000, Alexander Burger wrote:
> > > Julian Stecklina <··········@web.de> wrote:
> > > > Alexander Burger <···@software-lab.de> writes:
> > > - and inefficient execution (because the interpreter will run with a
> > >   lexical binding strategy to stay compatible with the (now unused)
> > >   compiler. Lexical binding is not efficient in an interpreter).
> 
> > Where do you get this nonsense from?  You realize that people have known
> > how to implement lexical scope in an interpreter for over 30 years,
> > right?  It's very easy, and hardly inefficient.  It's just a matter of
> 
> Of course it can be implemented. But how fast does it run?

Just as fast as any interpreter can.  It is really a very small change
(really, it is a bug fix), but with huge implications.  See [1].

> As I tried to explain in the papers and previous postings (obviously you
> didn't read them), I hate to put stress on raw speed. But if you insist,
> please show me a Lisp interpreter faster than Pico Lisp, regarding the
> bare evaluation mechanism.

I read your previous postings, and it is that which inspired me to post.
It is quite clear from them that you have some very misguided notions
about interpreters, compilers, and lexical scoping.

> > Using symbols instead of strings is like using a hammer to type on the
> > keyboard.  Lispers stopped doing stuff like this 20 years ago.  Strings
> > and symbols have different purposes.
> 
> What I want to do is apply the rich set of lisp functions in the Lisp
> language to as many data manipulations as possible. So it is just fine
> to convert a symbol to a list and back. What is bad with uniformity?

The data-structures are fundamentally different and need to be handled
appropriately by the underlying operations.  Even if the top layer
presented to the programmer is the same, the inner workings should
reflect the differences.  This is called abstraction.  What you are
doing is exposing the implementation of strings, and forcing it to be
this one inefficient method of using symbols.

> > > To bring more fun to programming.
> 
> > I think you have not considered the true implications of lexical scope
> > and closures.  I think you find that a lot of your ridiculous hacks are
> > unnecessary, for example, that ability to introduce small fragments of
> > code into the running program (for GUI or whatnot).  This is something
> 
> Please don't judge about things you don't understand.
> (Read the background in http://software-lab.de/dbui.html)

No, I understand.  Your code confirms it.  What you are doing is using
QUOTE to delay evaluation of certain pieces of code.  This is
inefficient and unnecessary, and has been abandoned by normal Lisp
programmers for decades now.  Your mistake seems to be a lack of
understanding of the power of LAMBDA, hence your elimination of it from
the language.  If you understood LAMBDA, then you would understand that
it is possible to have fully-compiled code which nonetheless
accomplishes the same goals of flexibility that you wish to.

> If CL is so flexible, please provide a solution to
> 
>    http://software-lab.de/succ.html
> 
> then. Everybody says its soo easy, but nobody's doing it.
> I'm still waiting ...

I see two solutions contributed for CL.  However, this is missing my
point, which is a much simpler one: you do not understand LAMBDA.



[1]

;; This interpreter was written with pedagogical concerns in mind.
;; Normally, I would write it differently (for example, in
;; continuation-passing style; a very powerful technique that you 
;; cannot do easily in a dynamically scoped language, btw).

(defstruct closure params body env)

(defun lexeval (expr &optional env)
  "A very simple interpreter for a simple lexically-scoped language"
  (cond ((symbolp expr)
         (let ((binding (assoc expr env)))
           (if binding
               (cdr binding)
               (error "Unbound variable: ~s" expr))))
        ((atom expr)
         expr)
        ((eql 'lambda (first expr))
         ;; LAMBDA, ie. (lambda (x y) (+ x y))
         (make-closure :params (second expr)
                       :body (third expr)
                       :env env))
        ((eql 'let (first expr))
         ;; LET, ie. (let ((x 1) (y 2)) (+ x y))
         (lexeval (third expr)
                  (nconc (mapcar (lambda (bind)
                                   (cons (first bind)
                                         (lexeval (second bind)
                                                  env)))
                                 (second expr))
                         env)))
        ((member (first expr)
                 '(+ - * /))
         ;; a few mathematical operations, for kicks
         (apply (first expr)
                (mapcar (lambda (e) (lexeval e env))
                        (rest expr))))
        (t
         ;; function application
         (let ((f (lexeval (first expr) env))
               (args (mapcar (lambda (e) (lexeval e env))
                             (rest expr))))
           (if (and (closure-p f)
                    (= (length args)
                       (length (closure-params f))))
               (lexeval (closure-body f)
                        (nconc (mapcar #'cons
                                       (closure-params f)
                                       args)
                               ;; This line is the difference:
                               (closure-env f)
                               ;; instead of using `env', I use the
                               ;; environment of the closure.
                               ))
               (error "Invalid function call: ~s ~s"
                      f args))))))

(defun test ()
  (lexeval '(let ((f (lambda (x) (lambda (y) (+ x y)))))
             ((f 2) 3))))





P.S. Your tutorial on Pico Lisp is rather confusing.  For example, you
start using the : and :: syntax without ever explaining it.  I found out
eventually from the reference that it has to do with properties.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nq2elF3gn4qU1@uni-berlin.de>
Matthew Danish <·······@andrew.cmu.edu> wrote:
> On Mon, Aug 09, 2004 at 05:00:44PM +0000, Alexander Burger wrote:
> > Matthew Danish <·······@andrew.cmu.edu> wrote:
> > As I tried to explain in the papers and previous postings (obviously you
> > didn't read them), I hate to put stress on raw speed. But if you insist,
> > please show me a Lisp interpreter faster than Pico Lisp, regarding the
> > bare evaluation mechanism.

As I wrote in another posting, the overhead is not so much in
binding/unbinding in a lexical environment as in the runtime lookup of
variable names.


> I read your previous postings, and it is that which inspired me to post.
> It is quite clear from them that you have some very misguided notions
> about interpreters, compilers, and lexical scoping.

I see. Your problem if you think so.


> No, I understand.  Your code confirms it.  What you are doing is using
> QUOTE to delay evaluation of certain pieces of code.  This is
> inefficient and unnecessary, and has been abandoned by normal Lisp
> programmers for decades now.  Your mistake seems to be a lack of
> understanding of the power of LAMBDA,

Uh?

> hence your elimination of it from
> the language.  If you understood LAMBDA, then you would understand that

It is not eliminated. It is just syntactically synonymous with 'quote'.
Do as in the FAQ: (setq lambda quote), then you can write:

: (mapcar (lambda (X Y) (+ X (* Y Y))) (1 2 3) (4 5 6))


> P.S. Your tutorial on Pico Lisp is rather confusing.  For example, you
> start using the : and :: syntax without ever explaining it.  I found out
> eventually from the reference that it has to do with properties.

Sorry. I assumed that it was clear from the context that ':' and '::'
are functions and thus can be looked up in the reference. I'll fix that
in the future. I'm glad for any feedback.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Matthew Danish
Subject: Re: Lisp in hardware
Date: 
Message-ID: <20040809201212.GI15746@mapcar.org>
On Mon, Aug 09, 2004 at 07:43:19PM +0000, Alexander Burger wrote:
> Matthew Danish <·······@andrew.cmu.edu> wrote:
> > On Mon, Aug 09, 2004 at 05:00:44PM +0000, Alexander Burger wrote:
> > > Matthew Danish <·······@andrew.cmu.edu> wrote:
> > > As I tried to explain in the papers and previous postings (obviously you
> > > didn't read them), I hate to put stress on raw speed. But if you insist,
> > > please show me a Lisp interpreter faster than Pico Lisp, regarding the
> > > bare evaluation mechanism.
> 
> As I wrote in another posting, the overhead is not so much with
> binding/unbinding in a lexical environment, but the runtime lookup of
> variable names.

This will always be more work for dynamic binding, because with lexical
scope the whole look-up can be eliminated and replaced with one of: a
machine register, a stack slot, or a reference to the heap depending on
the situation.
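
The elimination of the lookup can be sketched in Python (names invented
for illustration): a "compiler" resolves each variable to a fixed
(frames up, slot) position once, so the run-time access is pure
indexing, with no name search at all.

```python
class Frame:
    """One run-time environment frame: value slots plus a parent link."""
    def __init__(self, slots, parent=None):
        self.slots = slots
        self.parent = parent

def resolve(name, scopes):
    """Compile time: map a name to (frames_up, slot) by scanning the
    statically known nesting of scopes, innermost first."""
    for up, frame in enumerate(reversed(scopes)):
        if name in frame:
            return up, frame.index(name)
    raise NameError(name)

def compile_var(name, scopes):
    up, slot = resolve(name, scopes)   # the name lookup happens once, here
    def access(env):                   # run time: follow links, then index
        for _ in range(up):
            env = env.parent
        return env.slots[slot]
    return access

# Static scopes: an outer frame binding x, an inner frame binding y.
get_x = compile_var('x', [['x'], ['y']])
get_y = compile_var('y', [['x'], ['y']])
env = Frame([4], Frame([3]))           # inner y=4 over outer x=3
assert get_x(env) == 3 and get_y(env) == 4
```

A real compiler goes further: when the frame layout is fully known, the
slot becomes a stack offset or a machine register.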

> > No, I understand.  Your code confirms it.  What you are doing is using
> > QUOTE to delay evaluation of certain pieces of code.  This is
> > inefficient and unnecessary, and has been abandoned by normal Lisp
> > programmers for decades now.  Your mistake seems to be a lack of
> > understanding of the power of LAMBDA,
> 
> Uh?
> 
> > hence your elimination of it from
> > the language.  If you understood LAMBDA, then you would understand that
> 
> It is not eliminated. It just syntactially synonymous to 'quote'. Do as
> in the faq: (setq lambda quote), then you can write:
> 
> : (mapcar (lambda (X Y) (+ X (* Y Y))) (1 2 3) (4 5 6))

Precisely why I said you do not understand LAMBDA.  LAMBDA is not
semantically synonymous to QUOTE.

* (let ((f (lambda (x) (lambda (y) (+ x y)))))
    (funcall (funcall f 2) 3))
5

what does that look like translated to Pico?  I'll try:

: (let f (quote (x) (quote (y) (+ x y)))
    ((f 2) 3))
-> NIL

Obviously these operators are not the same.  I'll try an example of this
used in a GUI to help you understand:

(defun make-gui-adder ()
  (let* ((text1 (make-text-widget))
         (text2 (make-text-widget))
         (button
           (make-button-widget "+"
                               :callback (lambda ()
                                           (+ (parse-integer
                                                (value-of text1))
                                              (parse-integer
                                                (value-of text2)))))))
    ...))

This is really pseudo-code, but it's based loosely on Tcl/Tk, for which
there is a CL library.  It probably only requires a few changes to make
it work under that.  The important bit is that this code would be fully
compiled without losing any functionality, and it does not require an
interpreter to be available at run-time.  Add special variables for when
they are needed, and you have the best of both worlds.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nr6lsF3r8lrU1@uni-berlin.de>
Matthew Danish <·······@andrew.cmu.edu> wrote:
> On Mon, Aug 09, 2004 at 07:43:19PM +0000, Alexander Burger wrote:
> > Matthew Danish <·······@andrew.cmu.edu> wrote:
> > > On Mon, Aug 09, 2004 at 05:00:44PM +0000, Alexander Burger wrote:
> > > > Matthew Danish <·······@andrew.cmu.edu> wrote:
> > > > As I tried to explain in the papers and previous postings (obviously you
> > > > didn't read them), I hate to put stress on raw speed. But if you insist,
> > > > please show me a Lisp interpreter faster than Pico Lisp, regarding the
> > > > bare evaluation mechanism.
> > 
> > As I wrote in another posting, the overhead is not so much with
> > binding/unbinding in a lexical environment, but the runtime lookup of
> > variable names.

> This will always be more work for dynamic binding, because with lexical

In shallow dynamic binding, just a single pointer dereference is
involved. In principle something like '*(symbol*)p', if 'p' is a pointer
to the symbol in memory. What can be shorter?
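
The shallow-binding scheme can be sketched in Python (illustrative
names, not Pico Lisp's actual C implementation): every symbol owns a
single value cell, so a variable reference is one indirection, while
each call that rebinds a symbol pays a save/restore.

```python
class Symbol:
    """A symbol with one value cell, as in shallow dynamic binding."""
    __slots__ = ('value',)
    def __init__(self, value=None):
        self.value = value

def call_with_binding(sym, new_value, thunk):
    """Dynamically bind 'sym' around a call: save the cell, install the
    new value, and restore the old value on the way out."""
    saved = sym.value
    sym.value = new_value
    try:
        return thunk()       # inside, lookup is just 'sym.value'
    finally:
        sym.value = saved

X = Symbol(1)
assert call_with_binding(X, 5, lambda: X.value + 1) == 6
assert X.value == 1          # outer binding restored after the call
```

The trade-off the thread is circling: the reference is cheap, but the
binding is dynamic, which is exactly why the closure example above
returns NIL.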



> Precisely why I said you do not understand LAMBDA.  LAMBDA is not
> semantically synonymous to QUOTE.

> * (let ((f (lambda (x) (lambda (y) (+ x y)))))
>     (funcall (funcall f 2) 3))
> 5

> what does that look like translated to Pico?  I'll try:

> : (let f (quote (x) (quote (y) (+ x y)))
>     ((f 2) 3))
> -> NIL

Obviously, you don't understand dynamic binding. Of course, after
calling '(f 2)' the _dynamic_ context is lost, that's natural.
(Call it a feature :-)

In Pico Lisp, you might write:

: (let f (quote (@X) (fill '((Y) (+ @X Y))))
   ((f 2) 3) )
-> 5

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Matthew Danish
Subject: Re: Lisp in hardware
Date: 
Message-ID: <20040810072155.GK15746@mapcar.org>
On Tue, Aug 10, 2004 at 06:01:34AM +0000, Alexander Burger wrote:
> Matthew Danish <·······@andrew.cmu.edu> wrote:
> > This will always be more work for dynamic binding, because with lexical
> 
> In shallow dynamic binding, just a single pointer dereference is
> involved. In principle something like '*(symbol*)p', if 'p' is a pointer
> to the symbol in memory. What can be shorter?

Well, how about a register access, which I mentioned in my previous
reply in the part of the text that you cut out.

> > Precisely why I said you do not understand LAMBDA.  LAMBDA is not
> > semantically synonymous to QUOTE.
> 
> > * (let ((f (lambda (x) (lambda (y) (+ x y)))))
> >     (funcall (funcall f 2) 3))
> > 5
> 
> > what does that look like translated to Pico?  I'll try:
> 
> > : (let f (quote (x) (quote (y) (+ x y)))
> >     ((f 2) 3))
> > -> NIL
> 
> Obviously, you don't understand dynamic binding. Of course, after
> calling '(f 2)' the _dynamic_ context is lost, that's natural.
> (Call it a feature :-)

Ahem, but that's my point.  The dynamic context is lost.  But not the
lexical context, with which this sort of thing is possible to do, and do
efficiently.

> In Pico Lisp, you might write:
> 
> : (let f (quote (@X) (fill '((Y) (+ @X Y))))
>    ((f 2) 3) )
> -> 5

This is quite amusing actually, I wonder if you realize why?

You have basically reinvented lexical scoping, but in a broken way.
Let me reacquaint you with the lambda-calculus, briefly:

  When you apply a function, you substitute the argument wherever
  the formal parameter occurs within the function; except where that
  formal parameter is shadowed by another function definition using
  the same parameter name.

  In other words, ((lambda (x) body) y) becomes body[y/x] except for
  the condition I stated above.

  Done from left-to-right, and only as necessary, it is called
  normal-order reduction, as opposed to applicative-order reduction
  which always reduces arguments first.

Normally, lexical scoping is not implemented according to this
description, because it is terribly inefficient to be performing
substitutions in source code.  Instead, a closure is created with the
environment and the function packed together.  This is the efficient
way, which can be compiled down to very simple operations and yet retain
the flexibility of the other way.
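For illustration, the two-argument adder from the earlier example, written as a closure-returning function (MAKE-ADDER is just an illustrative name):

```lisp
;; LAMBDA packs the binding of X together with the code, so the inner
;; function still sees X after MAKE-ADDER has returned.
(defun make-adder (x)
  (lambda (y) (+ x y)))

(funcall (make-adder 2) 3)  ; => 5
```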

Your use of `fill' is broken because it does not respect the exception
to the substitution rule.  It uses symbols that do not have regard to
their context.  Consider:

(let f (quote (@X) (fill '((@Y) (fill '((Z) ((@X Z) @Y Z))))))
  (((f (quote (@Y) (fill '((X Z) (+ X Z @Y))))) 1) 2))

 ==>

(((quote (@Y) (fill '((Z) (((quote (@Y) (fill '((X Z) (+ X Z @Y)))) Z) @Y Z)))) 1) 2)

 ==>

((quote (Z) (((quote (1) (fill '((X Z) (+ X Z 1)))) Z) 1 Z)) 2)

 ==> Error (I didn't even have to show how the inner Y ended up with the
 wrong value!)

Abusing `fill' like this is not an appropriate way to obtain the effect
of lexical scoping.  And while it can be done correctly[1], you're still
left with a terribly inefficient and out-of-date hack.



[1]

;; I had this lying around, from an example I gave just the other day:

(defun lambda-eval (expr)
  "Evaluate the lambda-calculus using normal-order reduction
  (implemented as call-by-name).  The syntax is:
    Forms beginning with LAMBDA: (lambda VAR ...)
    Function application: (func arg)
    Ie. (lambda-eval '((lambda x (lambda y x)) (lambda z z)))"
  (cond ((atom expr)
	 (error "unbound variable ~A" expr))
	((eql (first expr) 'lambda)
	 expr)
	(t
	 (let ((f (lambda-eval (first expr))))
	   ;; left-most reduction
	   ;; eval will always return a function (see base case)
	   (assert (eql 'lambda (first f)))
	   ;; single parameter functions only
	   ;; call-by-name
	   (lambda-eval (sub (second expr) (second f) (third f)))))))
	   
(defun sub (new old expr)
  "Substitute new value for old variable, except where shadowed"
  (cond ((atom expr)
	 (if (eql expr old)
	   new
	   expr))
	((eql 'lambda (first expr))
	 (if (eql (second expr) old)
	   ;; shadowed
	   expr
	   (list 'lambda (second expr) (sub new old (third expr)))))
	(t
	 (list (sub new old (first expr))
	       (sub new old (second expr))))))


-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nrk5nF3vjldU1@uni-berlin.de>
Matthew Danish <·······@andrew.cmu.edu> wrote:
> On Tue, Aug 10, 2004 at 06:01:34AM +0000, Alexander Burger wrote:
> > Matthew Danish <·······@andrew.cmu.edu> wrote:
> > > This will always be more work for dynamic binding, because with lexical
> > 
> > In shallow dynamic binding, just a single pointer dereference is
> > involved. In principle something like '*(symbol*)p', if 'p' is a pointer
> > to the symbol in memory. What can be shorter?

> Well, how about a register access, which I mentioned in my previous
> reply in the part of the text that you cut out.

You have to get the symbol's value into the register. Or do you keep all
symbols in registers?

Remember, we are talking about the interpreter. The interpreter operates
on s-expressions, and s-expressions consist of cells, symbols and
numbers. Nested pointer structures. No stack or register in sight here.

All symbols are uniform. On the physical level, there is no distinction
between local, free and global variables. No need for "special"
variables and other tricks. The distinction between local, free and
global variables is decided dynamically (and in the head of the
programmer).



> > Obviously, you don't understand dynamic binding. Of course, after
> > calling '(f 2)' the _dynamic_ context is lost, that's natural.
> > (Call it a feature :-)

> Ahem, but that's my point.  The dynamic context is lost.  But not the
> lexical context, with which this sort of thing is possible to do, and do
> efficiently.

You are still caught in single-minded lexical thinking.

"lexical" is what things _look_ like (in the source code), but dynamic
is what they _are_ (physically, inside the machine). And the latter is
what really counts.

Nothing has to be done, no environment has to be carried around. That's
why dynamic binding is so efficient.

Pico Lisp tries to be as dynamic as possible. This is a good feature and
includes memory management, symbol binding, and late method binding.
Everything is just as it is. Nothing is going on behind the scenes, and
the programmer has full control. If you understand the simple dynamic
mechanisms, you can create any solution you desire (though that solution
might not satisfy your dogmatic terminology).


> > In Pico Lisp, you might write:
> > 
> > : (let f (quote (@X) (fill '((Y) (+ @X Y))))
> >    ((f 2) 3) )
> > -> 5

> This is quite amusing actually, I wonder if you realize why?
> You have basically reinvented lexical scoping, but in a broken way.

You may call it as you like. It is the most natural thing. If you ask
for a function 'f' returning Y+3, I give it to you. No need to make a
great fuss.


> Let me reacquaint you with the lambda-calculus, briefly:
> ...

Thank you for going through the trouble. Fine.

> Your use of `fill' is broken because it does not respect the exception
> to the substitution rule.  It uses symbols that do not have regard to
> their context.

Again, the lexical context may or may not be of interest.

I want to decide that by myself, not cling religiously to rules. Pico
Lisp gives me the freedom to manipulate data structures in any
conceivable way.

Most other languages impose too many restrictions on the programmer.
Notable exceptions are Forth and Assembly language, but both are more
tedious to program in.

Cheers,
- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Matthew Danish
Subject: Re: Lisp in hardware
Date: 
Message-ID: <20040810170950.GL15746@mapcar.org>
On Tue, Aug 10, 2004 at 09:51:53AM +0000, Alexander Burger wrote:
> Matthew Danish <·······@andrew.cmu.edu> wrote:
> > On Tue, Aug 10, 2004 at 06:01:34AM +0000, Alexander Burger wrote:
> > > Matthew Danish <·······@andrew.cmu.edu> wrote:
> > > > This will always be more work for dynamic binding, because with lexical
> > > 
> > > In shallow dynamic binding, just a single pointer dereference is
> > > involved. In principle something like '*(symbol*)p', if 'p' is a pointer
> > > to the symbol in memory. What can be shorter?
> 
> > Well, how about a register access, which I mentioned in my previous
> > reply in the part of the text that you cut out.
> 
> You have to get the symbol's value into the register. Or do you keep all
> symbols in registers?

With lexical scope, symbols do not hold values.  They merely name
variables.  Every compiler has a stage called `register allocation'
which computes an interference graph and uses it to allocate variables
into different classes of storage for different segments of code.  By
the time that this is done, there is no relationship left between the
symbol and the variable.  But even in an interpreter, the symbol is only
used to name the variable in an environment object; so you can easily
switch environments around.

You cannot do any of this when your variables are tied to symbols,
therefore your "dynamic scope" is never going to be as efficient as
lexical scope.  "Dynamic scope" can be a useful thing to have, but
lexical scope is far easier for the programmer to understand because it
makes it much easier to locate definitions and uses.  Fortunately, it's
possible to have both co-exist.

> > > Obviously, you don't understand dynamic binding. Of course, after
> > > calling '(f 2)' the _dynamic_ context is lost, that's natural.
> > > (Call it a feature :-)
> 
> > Ahem, but that's my point.  The dynamic context is lost.  But not the
> > lexical context, with which this sort of thing is possible to do, and do
> > efficiently.
> 
> You are still caught in single-minded lexical thinking.
> 
> "lexical" is what things _look_ like (in the source code), but dynamic
> is what they _are_ (physically, inside the machine). And the latter is
> what really counts.

Wrong, what's inside of the machine is whatever you want.  I arrange for
it to match the lexical context; that is no problem.  It's more
important to make it easier for the programmer to understand; the whole
point of high-level languages is that it doesn't matter what the machine
internals have to do.

> Nothing has to be done, no environment has to be carried around. That's
> why dynamic binding is so efficient.

But you absolutely have to dereference at least one pointer, because you
cannot separate the variables from symbols.

> Pico Lisp tries to be as dynamic as possible. This is a good feature and
> includes memory management, symbol binding, and late method binding.
> Everything is just as it is. Nothing is going on behind the scenes, and
> the programmer has full control. If you understand the simple dynamic
> mechanisms, you can create any solution you desire (though that solution
> might not satisfy your dogmatic terminology).

It has nothing to do with terminology and everything to do with
correctness.  Simulating lexical scope the way you did is not correct,
and it will bite you or one of your users in the ass eventually with
mysterious bugs.

There are very good reasons why Common Lisp switched to lexical scope by
default.  Have you considered that the entire Lisp community made a huge
break with the past by doing so?  We still use CAR and CDR, but gave up
"dynamic scope" as a default!

> > > In Pico Lisp, you might write:
> > > 
> > > : (let f (quote (@X) (fill '((Y) (+ @X Y))))
> > >    ((f 2) 3) )
> > > -> 5
> 
> > This is quite amusing actually, I wonder if you realize why?
> > You have basically reinvented lexical scoping, but in a broken way.
> 
> You may call it as you like. It is the most natural thing. If you ask
> for a function 'f' returning Y+3, I give it to you. No need to make a
> great fuss.

Please, you must understand.  Re-read my posting.  YOUR CODE IS BROKEN.
It will not work for more interesting programs, but instead cause
horrible and hard to find bugs.

> > Let me reacquaint you with the lambda-calculus, briefly:
> > ...
> 
> Thank you for going through the trouble. Fine.
> 
> > Your use of `fill' is broken because it does not respect the exception
> > to the substitution rule.  It uses symbols that do not have regard to
> > their context.
> 
> Again, the lexical context may or may not be of interest.

It has nothing to do with your interest.  Your code is buggy and wrong.

> I want to decide that by myself, not cling religiously to rules. Pico
> Lisp gives me the freedom to manipulate data structures in any
> conceivable way.
> 
> Most other languages impose too many restrictions on the programmer.
> Notable exceptions are Forth and Assembly language, but both are more
> tedious to program in.

You think of lexical scope as a restriction.  I think of it as a great
tool to create some amazingly powerful constructs.  You cannot get the
same expressivity without this tool, and hacking a half-assed buggy
semi-implementation of it is not going to help.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nu0kbF4jr9sU1@uni-berlin.de>
Matthew Danish <·······@andrew.cmu.edu> wrote:
> symbol and the variable.  But even in an interpreter, the symbol is only
> used to name the variable in an environment object; so you can easily
> switch environments around.

Exactly that's the point. Here is the overhead. Please see my separate
posting concerning interpreters.


> > Nothing has to be done, no environment has to be carried around. That's
> > why dynamic binding is so efficient.
> 
> But you absolutely have to dereference at least one pointer, because you
> cannot separate the variables from symbols.

That's what I said. This is the minimum, of course. But for a lexical
interpreter, you need some variable _lookup_.


> You think of lexical scope as a restriction.  I think of it as a great

No, it is not a restriction. A complicated total system is.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: ·········@random-state.net
Subject: Re: Lisp in hardware
Date: 
Message-ID: <cfctn4$5iboc$1@midnight.cs.hut.fi>
Alexander Burger <···@software-lab.de> wrote:

> That's what I said. This is the minimum, of course. But for a lexical
> interpreter, you need some variable _lookup_.

No -- that is exactly what you don't need. The closure can have direct
access to the store. Eg, if the closed-over variables are in a vector,
then any reference to such a variable can use a pre-computed offset
into the vector. Or if you reserve a cons-cell for each variable, lexical
reference becomes just a dereference. And if the code is compiled, it may
be possible to allocate a register for the variable, like others have
pointed out.

In contrast, with shallow binding special variables you need the
dereference, AND you need to undo the binding when leaving the contour.
With deep binding you need lookup.
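A hedged sketch of the cons-cell variant (MAKE-COUNTER is illustrative; a real implementation would perform this conversion inside the compiler or the LAMBDA operator):

```lisp
;; One cons cell per closed-over variable.  MAKE-COUNTER's variable
;; lives in CELL; both returned functions share that same cell, so a
;; reference is one CAR and an assignment one store -- no name lookup.
(defun make-counter ()
  (let ((cell (cons 0 nil)))            ; the storage for the counter
    (list (lambda () (setf (car cell) (1+ (car cell))))   ; increment
          (lambda () (car cell)))))                       ; read

(let ((c (make-counter)))
  (funcall (first c))
  (funcall (first c))
  (funcall (second c)))  ; => 2
```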

> No, it is not a restriction. A complicated total system is.

I think this may be the core of the contention. Most people don't mind
complicated systems as long as they work. Also, it seems to me that the
consensus agrees that it's OK for something to be hard to implement if
the payoff is worth it. Hence, we have lexical scope and compilers -- and
we like 'em.

Cheers,

 -- Nikodemus                   "Not as clumsy or random as a C++ or Java. 
                             An elegant weapon for a more civilized time."
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nueneF4qvj7U1@uni-berlin.de>
·········@random-state.net wrote:
> Alexander Burger <···@software-lab.de> wrote:

> > That's what I said. This is the minimum, of course. But for a lexical
> > interpreter, you need some variable _lookup_.

> No -- that is exactly what you don't need. The closure can have direct
> access to the store. Eg, if the closed-over variables are in a vector,
> then any reference to such a variable can use a pre-computed offset
> into the vector.

How can you pre-compute anything in an interpreter?

When the interpreter 'sees' a variable (a pointer to a symbol
structure), it must find its current value: Look it up in an
environment.


> Or if you reserve a cons-cell for each variable, lexical
> reference becomes just a dereference.

This is the _dynamic_ behavior. Exactly my point. In a _lexical_
environment this doesn't work.


> And if the code is compiled, it may
> be possible to allocate a register for the variable, like others have
> pointed out.

Yes, sure. That's what I'm repeating all the time. Lexical binding is
superior when implemented in a compiler. However, it is tedious and
ineffective in an interpreter.

The additional disadvantage is that a purely lexical system is not very
useful. You have to support _both_ strategies, e.g. via "special"
variables. That's an additional interpretation overhead.

Please check my other post about "interpreters", and provide a solution
to B.2 without the need for lookup.


> > No, it is not a restriction. A complicated total system is.

> I think this may be the core of the contention. Most people don't mind
> complicated systems as long as they work. Also, it seems to me that the
> consensus agrees that it's OK for something to be hard to implement if
> the payoff is worth it. Hence, we have lexical scope and compilers -- and
> we like 'em.

Then I strongly recommend to use them. I didn't say you have to switch.

> Cheers,

>  -- Nikodemus                   "Not as clumsy or random as a C++ or Java. 
>                              An elegant weapon for a more civilized time."

Regards,
- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: ·········@random-state.net
Subject: Re: Lisp in hardware
Date: 
Message-ID: <cfd22n$5iboc$2@midnight.cs.hut.fi>
Alexander Burger <···@software-lab.de> wrote:

> How can you pre-compute anything in an interpreter?

See my earlier post. Interpreters are a multitude. In an "evaluator style
interpreter", you don't. In pretty much anything else you do: eval =
analyze + exec. Though even in an evaluator you could, with a bit of
cleverness, cache critical information so that the next time the expression
is evaluated no lookup is needed.

Evaluators just aren't particularly interesting, imo, epitomizing the
simplicity you so much desire, but simultaneously being the slowest
possible way to go about things (ok: for simple expressions typed at a
repl an evaluator is probably faster, but for bits of code being executed
repeatedly -- like functions -- they just suck).

>> Or if you reserve a cons-cell for each variable, lexical
>> reference becomes just a dereference.

> This is the _dynamic_ behavior. Exactly my point. In a _lexical_
> environment this doesn't work.

It does. Sorry, but it does. It's a cell per variable in an environment:
if variable X exists in three nested lexical environments, each with a
separate binding, and Y has only a binding at the outermost level, then
you have three cells for X, and one for Y.

It works. Perfectly. Vector storage is neater, though, and takes less
space.
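This can be illustrated directly in Common Lisp (*READERS* is just a scratch variable for the example): each binding of X is a fresh cell, and each closure captures its own.

```lisp
;; Each binding form allocates a fresh cell, so shadowing just means a
;; fresh cell under the same name -- an inner closure captures *its* cell.
(defvar *readers* nil)

(let ((x 1))
  (push (lambda () x) *readers*)        ; captures the cell holding 1
  (let ((x 2))
    (push (lambda () x) *readers*)      ; a second, distinct cell: 2
    (let ((x 3))
      (push (lambda () x) *readers*)))) ; a third cell: 3

(mapcar #'funcall *readers*)  ; => (3 2 1)
```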

> Please check my other post about "interpreters", and provide a solution
> to B.2 without the need for lookup.

I've better things to do than solve your challenges, and then hear you say
"but that's not interpretation!".

Cheers,

 -- Nikodemus                   "Not as clumsy or random as a C++ or Java. 
                             An elegant weapon for a more civilized time."
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2numssF4upu1U1@uni-berlin.de>
·········@random-state.net wrote:
> Alexander Burger <···@software-lab.de> wrote:

> I've better things to do than solve your challenges, and then hear you say
> "but that's not interpretation!".

Exactly :-)

The discussion about interpretation in this thread was about a virtual
machine operating on _s-expressions_.

What you are talking about is some type of compilation, though not to
machine code, but to some other representation. Converting one
representation of a program (s-expressions) to some other (machine code,
byte codes or also another s-expression) is a compilation process.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Duane Rettig
Subject: Re: Lisp in hardware
Date: 
Message-ID: <4d61xvfa8.fsf@franz.com>
Alexander Burger <···@software-lab.de> writes:

> ·········@random-state.net wrote:
> > Alexander Burger <···@software-lab.de> wrote:
> 
> > I've better things to do then solve your challenges, and then hear you say
> > "but that that's not interpretation!".
> 
> Exactly :-)
> 
> The discussion about interpretation in this thread was about a virtual
> machine operating on _s-expressions_.

That's not what you've been discussing.  You've been discussing operations
on internalized forms of s-expressions.  Ascii-art boxes representing
memory. Translated from s-expressions.  Pre-compiled, if you will.

How do you distinguish s-expressions internally from m-expressions?
I submit that they are indistinguishable, since they have been
translated (compiled, in your broad sense of the word) into internal
machine representations.

> What you are talking about is some type of compilation, though not to
> machine code, but to some other representation. Converting one
> representation of a program (s-expressions) to some other (machine code,
> byte codes or also another s-expression) is a compilation process.

Yes; and therefore, Pico Lisp is compiled at a level that you had
not intended!

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nv6ndF4t7sbU2@uni-berlin.de>
Duane Rettig <·····@franz.com> wrote:
> Alexander Burger <···@software-lab.de> writes:
> That's not what you've been discussing.  You've been discussing operations
> on internalized forms of s-expressions.  Ascii-art boxes representing
> memory. Translated from s-expressions.  Pre-compiled, if you will.

All the same on a certain abstraction level.
Come on! Don't be picky, you know what I mean. 

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Duane Rettig
Subject: Re: Lisp in hardware
Date: 
Message-ID: <48yclv3rw.fsf@franz.com>
Alexander Burger <···@software-lab.de> writes:

> Duane Rettig <·····@franz.com> wrote:
> > Alexander Burger <···@software-lab.de> writes:
> > That's not what you've been discussing.  You've been discussing operations
> > on internalized forms of s-expressions.  Ascii-art boxes representing
> > memory. Translated from s-expressions.  Pre-compiled, if you will.
> 
> All the same on a certain abstraction level.
> Come on! Don't be picky, you know what I mean. 

It is precisely your misunderstanding of other people's assumptions
that is causing you to have so many disagreements in this thread.
It is your assumption that we all know what you mean, and what your
point of view is, which has you frustrated with our not coming to
the same obvious conclusions that you draw.

No one can know what another person intends to mean; only what he says.
Yes, I'm being picky - it's the only way to make sense out of what
you _say_.  I can't make any other assumptions.  If my conclusions
are wrong, then don't tell me what I know and what I don't know.
Instead, correct my mistaken assumptions about what you are saying.

You have been shown by several here how your definition of compilation
doesn't make sense.  If you want to enlighten us, tell us what your
definition of compilation actually is, and then we can discuss the
issue at the abstraction level you desire.  From the gestalt of
your writings, your usage of the word compilation is not consistent.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2o10u8F5auh1U3@uni-berlin.de>
Duane Rettig <·····@franz.com> wrote:
> You have been shown by several here how your definition of compilation
> doesn't make sense.  If you want to enlighten us, tell us what your
> definition of compilation actually is, and then we can discuss the

I tried to show a semi-formal definition using pseudo code in another
posting. I don't repeat it here.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Paul Dietz
Subject: Re: Lisp in hardware
Date: 
Message-ID: <411A7292.B380C652@motorola.com>
Alexander Burger wrote:

> All the same on a certain abstraction level.
> Come on! Don't be picky, you know what I mean.

Yes.  Your abstraction level is good, our abstraction level is bad.

Sheesh.

	Paul
From: Marcus Breiing
Subject: Re: Lisp in hardware
Date: 
Message-ID: <CO9SMW5mPZh@breiing.com>
* Alexander Burger

[closures and efficient lexical variables]

> How can you pre-compute anything in an interpreter?

You don't have to pre-compute (+ 1 2) to have it return 3 at run time,
and you don't have to pre-compute (lambda (x) (* x x)) to have it
return a closure at run time.

Maybe there's something about how you create, pass around and use code
fragments in Pico Lisp applications that would be impossible to do
with lexical closures?

Marcus

-- 
Marcus Breiing    http://breiing.com/postings/GoOmdntW2BM
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nunn3F4r9hnU1@uni-berlin.de>
Marcus Breiing <···············@breiing.com> wrote:
> * Alexander Burger

> [closures and efficient lexical variables]

> > How can you pre-compute anything in an interpreter?

> You don't have to pre-compute (+ 1 2) to have it return 3 at run time,
> and you don't have to pre-compute (lambda (x) (* x x)) to have it
> return a closure at run time.

Hm, I'm not sure if I understand you. The discussion was about a
pre-computed index into an environment.

> Maybe there's something about how you create, pass around and use code
> fragments in Pico Lisp applications that would be impossible to do
> with lexical closures?

Not concerning _this_ discussion. Of course, from a strictly lexical
scope you cannot access free and global variables, but that's another
issue.

When I talked about those code fragments, the statement was just that it
is not very meaningful to compile all those small fragments which are
mostly never used. Oh, confusing ;-) Too many different subjects at the
same time ... What went wrong? I did not start this discussion, but
found myself suddenly having to defend several diverging opinions.


> Marcus Breiing    http://breiing.com/postings/GoOmdntW2BM

Cheers,
- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Edi Weitz
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87657pycqz.fsf@bird.agharta.de>
On 11 Aug 2004 14:10:45 GMT, Alexander Burger <···@software-lab.de> wrote:

> I did not start this discussion, but found myself suddenly having to
> defend several diverging opinions.

The easiest way to avoid this is not to have so many diverging
opinions... :)

-- 

"Lisp doesn't look any deader than usual to me."
(David Thornley, reply to a question older than most languages)

Real email: (replace (subseq ·········@agharta.de" 5) "edi")
From: Marcus Breiing
Subject: Re: Lisp in hardware
Date: 
Message-ID: <SCfueZFMa5e@breiing.com>
* Alexander Burger

>> > How can you pre-compute anything in an interpreter?

>> You don't have to pre-compute (+ 1 2) to have it return 3 at run time,
>> and you don't have to pre-compute (lambda (x) (* x x)) to have it
>> return a closure at run time.

> Hm, I'm not sure if I understand you. The discussion was about a
> pre-computed index into an environment.

You have to "pre-compute" a LAMBDA just as much (or rather, just as
little) as a + or LET or MAPCAR: You have an operator that operates on
its arguments (possibly un-evaluated, possibly including the
environment) and returns a result, and it needs to do its work no
earlier than the moment it is called.

Just returning the source s-expressions doesn't cut it for lexical
LAMBDA, because the closure object has to hold the captured lexical
environment.  So this LAMBDA operator implements an actual
transformation, not just an alternate spelling of QUOTE.

Now, my point was: Additionally replacing symbolic references to
lexical variables with index-based references is more of an increment
in the complexity of the LAMBDA *operator* than a fundamental change
in how the *interpreter* works.  Thus, your question:

>> > How can you pre-compute anything in an interpreter?

doesn't seem to apply.  Also, since a lexical closure is not identical
to its source code anyway, invocation of the "code is data" principle
shouldn't apply either.  This is why I asked whether Pico Lisp
applications would have problems with lexical closures apart from
worries over efficiency and "code is data."
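A minimal sketch of such an interpreter (MY-EVAL and friends are hypothetical names; error handling and built-in operators are omitted), in which the LAMBDA case simply packs the current environment into a closure object instead of returning its source form:

```lisp
;; Hedged sketch: LAMBDA is an operator like any other.  Where QUOTE
;; would return the bare list, LAMBDA captures ENV at that moment.
(defstruct closure params body env)

(defun my-eval (expr env)
  (cond ((symbolp expr) (cdr (assoc expr env)))  ; variable reference
        ((atom expr) expr)                       ; self-evaluating
        ((eq (first expr) 'lambda)               ; capture ENV here
         (make-closure :params (second expr) :body (third expr) :env env))
        (t (my-apply (my-eval (first expr) env)
                     (mapcar (lambda (a) (my-eval a env)) (rest expr))))))

(defun my-apply (fn args)
  (my-eval (closure-body fn)
           (append (mapcar #'cons (closure-params fn) args)
                   (closure-env fn))))

(my-eval '(((lambda (x) (lambda (y) x)) 1) 2) nil)  ; => 1
```

The inner lambda returns 1, not 2, because it carries the environment in which it was created -- exactly the behavior the `(f 2)` example earlier in the thread fails to produce under dynamic binding.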

Marcus
From: Joerg Hoehle
Subject: Re: Lisp in hardware
Date: 
Message-ID: <uoelg2tmn.fsf@users.sourceforge.net>
Matthew Danish <·······@andrew.cmu.edu> writes:
> Have you considered that the entire Lisp community made a huge
> break with the past by doing so?  We still use CAR and CDR, but gave up
> "dynamic scope" as a default!

> Please, you must understand.  Re-read my posting.  YOUR CODE IS BROKEN.
> It will not work for more interesting programs, but instead cause
> horrible and hard to find bugs.

This reminds me of some comments I came across in some larger
Emacs-Lisp package (was it the original ange-ftp by Andy Norman?),
where the author swore at Emacs' lack of lexical scoping. The
comment said something to the effect of "the following (40 lines of
code) is clumsy. It would have been trivial had Emacs provided lexical
scoping".

So much for succinctness ("shortness" or "power"), Alexander's holy grail.
http://software-lab.de/succ.html

Alexander Burger writes:
> You don't need any "clever macrology" to get any desired behavior in
> Pico Lisp, including lexical scoping.

Note how Emacs-Lisp has *not* made the switch to lexical scoping like
Common Lisp did.  Nevertheless, there are loads of interesting Emacs
packages -- it only shows one can go very far with dynamic
scoping. That's maybe what counts for Alexander: he can go very far
with Pico Lisp, and doesn't (yet) want to listen to enlightened people
from Common Lisp, Scheme, or Algol.

It also reminds me of comments from Guido van Rossum (of Python fame),
at the time Python 1.4 or 1.5 was current: "what do we need a MOP for?
__getitem__() etc. is all you ever need."
Time passed by, many people gave arguments, and now Guido understands
the issues and Python 2.x has meta-classes.

I find that history repeats itself. The root problem is maybe that
simple things work with simple examples.

Regards,
	Jorg Hohle
Telekom/T-Systems Technology Center
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2o346hF670ooU1@uni-berlin.de>
Joerg Hoehle <······@users.sourceforge.net> wrote:

> So much for succinctness ("shortness" or "power"), Alexander's holy grail.
> http://software-lab.de/succ.html

That is not about succinctness of the language! It is about the
succinctness of the _application_programming_ framework, resulting
from the power and flexibility of the underlying language.

I believe you can build such a framework only if the underlying language
(the system programming language; Pico Lisp in that case) is flexible
enough, from the higher down to the lowest levels.

>         Jorg Hohle
> Telekom/T-Systems Technology Center

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <86smap1zbc.fsf@goldenaxe.localnet>
Joerg Hoehle <······@users.sourceforge.net> writes:

> This reminds me of some comments I came across in some larger
> Emacs-Lisp package (was it the original ange-ftp by Andy Norman?),
> where the author swore at Emacs' lack of lexical scoping. The
> comment said something to the effect of "the following (40 lines of
> code) is clumsy. It would have been trivial had Emacs provided lexical
> scoping".

From the XEmacs that I am writing this reply with:

`lexical-let' is a compiled Lisp macro
  -- loaded from "cl-macs"
(lexical-let BINDINGS &rest BODY)

Documentation:
(lexical-let BINDINGS BODY...): like `let', but lexically scoped.
The main visible difference is that lambdas inside BODY will create
lexical closures as in Common Lisp.
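
A minimal sketch of what that buys you (make-counter is just an
illustrative name, assuming the cl package is loaded):

  (require 'cl)

  (defun make-counter ()
    (lexical-let ((n 0))              ; n becomes a closed-over lexical variable
      (lambda () (setq n (1+ n)))))

  (setq c (make-counter))
  (funcall c)                         ; => 1
  (funcall c)                         ; => 2, the closure remembers n

With a plain `let', the returned lambda would instead look up the
dynamic value of n at call time and signal a void-variable error here.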

Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful
From: Vassil Nikolov
Subject: Re: Lisp in hardware
Date: 
Message-ID: <lzbrhdr7pf.fsf@janus.vassil.nikolov.names>
Julian Stecklina <··········@web.de> writes:

> Joerg Hoehle <······@users.sourceforge.net> writes:
>
>> This reminds me of some comments I came across in some larger
>> Emacs-Lisp package (was it the original ange-ftp by Andy Norman?),
>> where the author swore at Emacs' lack of lexical scoping. The
>> comment said something to the effect of "the following (40 lines of
>> code) is clumsy. It would have been trivial had Emacs provided lexical
>> scoping".
>
> From the XEmacs that I am writing this reply with:
>
> `lexical-let' is a compiled Lisp macro
>   -- loaded from "cl-macs"
> (lexical-let BINDINGS &rest BODY)
>
> Documentation:
> (lexical-let BINDINGS BODY...): like `let', but lexically scoped.
> The main visible difference is that lambdas inside BODY will create
> lexical closures as in Common Lisp.


  Unfortunately, these things are hard to do right when the language
  implementation does not support them---here is just one trivial
  example:

  (lexical-let ((x 0)) x)
  => 0

  (progv '(x) '(1) x)
  => 1

  (lexical-let ((x 0)) (progv '(x) '(1) x))
  => 0

  This sheds some light on why:

  (cl-prettyprint (macroexpand '(lexical-let ((x 0)) (progv '(x) '(1) x))))
  (let ((--x--11508 0))
    (let ((cl-progv-save nil))
      (unwind-protect
  	(progn
  	  (cl-progv-before '(x) '(1))
  	  (symbol-value '--x--11508))
        (cl-progv-after))))

  Let the (cl-macs) programmer beware...

  ---Vassil.

-- 
Vassil Nikolov <········@poboxes.com>

Hollerith's Law of Docstrings: Everything can be summarized in 72 bytes.
From: Vassil Nikolov
Subject: Re: Lisp in hardware
Date: 
Message-ID: <lz7js1r7a7.fsf@janus.vassil.nikolov.names>
Vassil Nikolov <········@poboxes.com> writes:

> Julian Stecklina <··········@web.de> writes:
>
>> Joerg Hoehle <······@users.sourceforge.net> writes:
>>
>>> This reminds me of some comments I came across in some larger
>>> Emacs-Lisp package (was it the original ange-ftp by Andy Norman?),
>>> where the author swore at Emacs' lack of lexical scoping. The
>>> comment said something to the effect of "the following (40 lines of
>>> code) is clumsy. It would have been trivial had Emacs provided lexical
>>> scoping".
>>
>> From the XEmacs that I am writing this reply with:
>>
>> `lexical-let' is a compiled Lisp macro
>>   -- loaded from "cl-macs"
>> (lexical-let BINDINGS &rest BODY)
>>
>> Documentation:
>> (lexical-let BINDINGS BODY...): like `let', but lexically scoped.
>> The main visible difference is that lambdas inside BODY will create
>> lexical closures as in Common Lisp.
>
>
>   Unfortunately, these things are hard to do right when the language
>   implementation does not support them---here is just one trivial
>   example:
>
>   (lexical-let ((x 0)) x)
>   => 0
>
>   (progv '(x) '(1) x)
>   => 1
>
>   (lexical-let ((x 0)) (progv '(x) '(1) x))
>   => 0


  And I obviously forgot to mention that to run the equivalent example
  in Common Lisp, one would have to declare X special in the body of
  the PROGV (besides using LET instead of lexical-let).

  ---Vassil.

-- 
Vassil Nikolov <········@poboxes.com>

Hollerith's Law of Docstrings: Everything can be summarized in 72 bytes.
From: Kalle Olavi Niemitalo
Subject: scope in Emacs Lisp (was: Lisp in hardware)
Date: 
Message-ID: <878ychwj7d.fsf_-_@Astalo.kon.iki.fi>
Vassil Nikolov <········@poboxes.com> writes:

>   (lexical-let ((x 0)) (progv '(x) '(1) x))
>   => 0

I don't think that's wrong.  Emacs Lisp and Common Lisp do have
different semantics for variables, but it is often possible to
convert between them.  There are some quirks though.

Emacs Lisp: (lexical-let ((x 0)) (progv '(x) '(1) x))
Common Lisp: (let ((x 0)) (progv '(x) '(1) x))
result: 0
comments: The last x refers to the lexical variable and is not
related to the symbol-value bound with progv.

Common Lisp: (let ((x 0))
               (progv '(x) '(1)
                 (locally (declare (special x)) x)))
Emacs Lisp: (lexical-let ((x 0))
              (progv '(x) '(1)
                (symbol-value 'x)))
result: 1
comments: Emacs Lisp does not support special declarations, but
the symbol-value function can be used instead.

Emacs Lisp: (lexical-let ((x 0))
              (labels ((get () x))
                (let ((x 1))
                  (get))))
Common Lisp: (let ((x 0))
               (flet ((get () x))
                 (let ((#1=#.(gensym "X") x))
                   (unwind-protect
                       (progn (setq x 1)
                              (get))
                     (setq x #1#)))))
result: 1
comments: Common Lisp does not support dynamic bindings of
lexical variables, though unwind-protect can simulate them on
single-threaded systems.

Common Lisp: (let ((x 0))
               (flet ((get () x))
                 (let ((x 1)) (declare (special x))
                   (get))))
Emacs Lisp: (lexical-let ((x 0))
              (labels ((get () x))
                (letf (((symbol-value 'x) 1))
                  (get))))
result: 0
From: Barry Margolin
Subject: Re: scope in Emacs Lisp (was: Lisp in hardware)
Date: 
Message-ID: <barmar-3F3012.20294814082004@comcast.dca.giganews.com>
In article <·················@Astalo.kon.iki.fi>,
 Kalle Olavi Niemitalo <···@iki.fi> wrote:

> Vassil Nikolov <········@poboxes.com> writes:
> 
> >   (lexical-let ((x 0)) (progv '(x) '(1) x))
> >   => 0
> 
> I don't think that's wrong.  Emacs Lisp and Common Lisp do have
> different semantics for variables, but it is often possible to
> convert between them.  There are some quirks though.
> 
> Emacs Lisp: (lexical-let ((x 0)) (progv '(x) '(1) x))
> Common Lisp: (let ((x 0)) (progv '(x) '(1) x))
> result: 0
> comments: The last x refers to the lexical variable and is not
> related to the symbol-value bound with progv.
> 
> Common Lisp: (let ((x 0))
>                (progv '(x) '(1)
>                  (locally (declare (special x)) x)))
> Emacs Lisp: (lexical-let ((x 0))
>               (progv '(x) '(1)
>                 (symbol-value 'x)))
> result: 1
> comments: Emacs Lisp does not support special declarations, but
> the symbol-value function can be used instead.

You can also use symbol-value in CL.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Vassil Nikolov
Subject: Re: scope in Emacs Lisp
Date: 
Message-ID: <lzu0v239ow.fsf@janus.vassil.nikolov.names>
Kalle Olavi Niemitalo <···@iki.fi> writes:

> Vassil Nikolov <········@poboxes.com> writes:
>
>>   (lexical-let ((x 0)) (progv '(x) '(1) x))
>>   => 0
>
> I don't think that's wrong.  Emacs Lisp and Common Lisp do have
> different semantics for variables, but it is often possible to
> convert between them.  There are some quirks though.


  It's not wrong, but I don't think it is quite right either.  What bugs
  me is that the way lexical-let works is neither the Common Lisp way
  (which, roughly speaking, makes all variable references lexical,
  unless declared/proclaimed special), nor the Elisp way (which, roughly
  speaking, makes all variable references special).  Hence the quirks,
  of which the programmer must beware...  I suppose I wouldn't have made
  this point if the macro had been called differently (lexical-var?
  letsubst?).


> [...]
> Common Lisp: (let ((x 0))
>                (progv '(x) '(1)
>                  (locally (declare (special x)) x)))
> Emacs Lisp: (lexical-let ((x 0))
>               (progv '(x) '(1)
>                 (symbol-value 'x)))
> result: 1
> comments: Emacs Lisp does not support special declarations, but
> the symbol-value function can be used instead.


  Yes, of course, but I don't want to refer to variables this way,
  like I wouldn't want a macro character that introduces a variable
  reference (such as $x being read as (symbol-value 'x)).


  ---Vassil.


  P.S. Sorry about the delay in responding: it took quite a while for
  the message to reach the news server I am using.  Probably need to
  look for another server...


-- 
Vassil Nikolov <········@poboxes.com>

Hollerith's Law of Docstrings: Everything can be summarized in 72 bytes.
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <86isbqocvy.fsf@web.de>
Alexander Burger <···@software-lab.de> writes:

> Nothing has to be done, no environment has to be carried around. That's
> why dynamic binding is so efficient.

It is part of the function object. It does not have to be "carried
around" ...

> I want to decide that by myself, not cling religiously to rules. Pico
> Lisp gives me the freedom to manipulate data structures in any
> conceivable way.

You actually do cling religiously to rules.

> Most other languages impose too many restrictions on the programmer.
> Notable exceptions are Forth and Assembly language, but both are more
> tedious to program in.

CL has dynamic and lexical scoping. And with some clever macrology you
could even make dynamic scoping the default. PicoLisp only has dynamic
scoping and real lexical scoping is not in sight. Who is more
restricted?
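
To be fair about the first half of that: for plain dynamic scoping CL
needs no macrology at all, since special variables already behave the
way Pico Lisp variables do (a minimal sketch; *depth* and probe are
just illustrative names):

  (defvar *depth* 0)            ; proclaimed special, i.e. dynamically scoped

  (defun probe () *depth*)      ; reads the current dynamic binding

  (probe)                       ; => 0
  (let ((*depth* 1))            ; rebinds the dynamic value...
    (probe))                    ; => 1, the callee sees the rebinding
  (probe)                       ; => 0, the old binding is restored on exit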

Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nu0n2F4jr9sU2@uni-berlin.de>
Julian Stecklina <··········@web.de> wrote:
> Alexander Burger <···@software-lab.de> writes:

> > Nothing has to be done, no environment has to be carried around. That's
> > why dynamic binding is so efficient.

> It is part of the function object. It does not have to be "carried
> around" ...

Sigh. See my separate posting concerning interpreters.


> CL has dynamic and lexical scoping. And with some clever macrology you
> could even make dynamic scoping the default. PicoLisp only has dynamic
> scoping and real lexical scoping is not in sight. Who is more
> restricted?

You don't need any "clever macrology" to get any desired behavior in
Pico Lisp, including lexical scoping.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Alexander Burger
Subject: Interpreters (was: Lisp in hardware)
Date: 
Message-ID: <2nu1blF4k2b6U1@uni-berlin.de>
I'm quite surprised how little some people who posted to this thread
seem to understand about the inner workings of an interpreter.

They keep on talking about the efficiency of lexical binding, which is
true for compiled code, but which is not given for an interpreter. I'm
not going to discuss the benefits of either binding strategy here, nor
whether an interpreter is still needed in Lisp, but I have to defend a
previous claim: that one important reason why Lisp did not switch to
lexical binding earlier was its relative inefficiency in an
interpreter.


Ok, I'll try to present some code in a kind of pseudo-assembly language.
The examples are a bit simplified; they omit, for example, the
bookkeeping required for the garbage collector.


I consider three cases: compiler/lexical, interpreter/dynamic, and
interpreter/lexical. We have to look at three places:

1. The setup of the binding environment at a function's beginning

2. The access to the value of a bound variable when the function
   is running

3. And the cleanup of the environment (unbinding)


I assume a function with two formal parameters 'X' and 'Y'.

   (lambda (X Y) ...  X ... Y ...)

The evaluated arguments are in some locations (e.g. registers) 'arg1'
and 'arg2'.



A. ### compiler/lexical:

A.1. Set up lexical environment on function entry
   sub   sp,8           # Allocate space on the stack for X and Y
   mov   arg1,4(sp)     # Store the first evaluated argument
   mov   arg2,0(sp)     # and the second

A.2. Access the value of 'X'
   ...
   mov   4(sp),r1       # Get value of 'X' into register 1
   ...

A.3. Cleanup
   add   sp,8           # Restore stack pointer
   ret                  # Return

This is extremely short and fast. Can't be beaten.



How does an interpreter execute the above function?

Let's assume we have a pointer to the function

   ((X Y) (+ X Y))

in 'r1'. That is, the CAR is the formal parameter list, and the CDR is
the body.



In the case of interpreter/dynamic we have the current value of each
symbol stored in its value cell. For efficiency, this is the first
memory location in the symbol structure, i.e. where the symbol pointer
points to.


B. ### interpreter/dynamic:

B.1. Dynamically bind arguments on function entry
   mov   (r1)+,r2       # Get the CAR (the parameter list) into r2
   mov   (r1),r1        # Point r1 to CDR of function (the body)
   mov   (r2)+,r3       # Get the pointer to 'X' into r3
   mov   (r2),r2        # r2 on CDR of parameter list
   push  (r3)           # Save the old value of 'X'
   push  r3             # and the pointer to 'X'
   mov   arg1,(r3)      # Bind 'X' to the new value (arg1)
   mov   (r2)+,r3       # Get the pointer to 'Y' into r3
   push  (r3)           # Save the old value of 'Y'
   push  r3             # and the pointer to 'Y'
   mov   arg2,(r3)      # Bind 'Y' to the new value (arg2)

B.2. Access the value of 'X'. Assume a pointer to 'X' in 'r1'
   ...
   mov   (r1),r2        # Get the value of 'X' into register 2
   ...

B.3. Cleanup
   pop   r3             # Retrieve the pointer to 'Y'
   pop   (r3)           # Restore old value of 'Y'
   pop   r3             # Retrieve the pointer to 'X'
   pop   (r3)           # Restore old value of 'X'
   ret                  # Return

You see that there is a bit more overhead on function entry and exit, to
manage the dynamic binding. But an interpreter is slower anyway.

However, the access to a symbol's value is acceptably short, just a
single pointer de-reference: mov (r1),r2



So how is the case for an interpreter in a lexical system?

You might build up the lexical environment as in A.1.

But where do you keep the information that 'Y' is the first entry and
'X' is the second? In compiled code, it is hard-coded. In the
interpreter, we cannot store that information in the value cells of 'X'
and 'Y', because then we would have dynamic behavior again. Using some
externally allocated memory structure, or an association list, is
prohibitive because of the overhead for allocation/deallocation. We best
store it in the environment.

So we also have to keep pointers to 'X' and 'Y' in the environment.

C. ### interpreter/lexical:

C.1. Set up lexical environment on function entry
   sub   sp,16          # Allocate space on the stack for X and Y
   mov   arg1,12(sp)    # Store the first evaluated argument in the frame
   mov   'X',8(sp)      # Remember 'X'
   mov   arg2,4(sp)     # and the second
   mov   'Y',0(sp)      # Remember 'Y'

C.2. Access the value of 'X'
   ...
   Search for 'X' in the stack frame, and access the following location
   ...

C.3. Cleanup
   add   sp,16          # Restore stack pointer
   ret                  # Return

So the problem lies with C.2. The access to values of symbols can be
quite time consuming if the environment is large. And symbol values are
typically accessed more than once in a function body.

Regards,
- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Rob Warnock
Subject: Re: Interpreters (was: Lisp in hardware)
Date: 
Message-ID: <a_-dneG_IcMjvYfcRVn-rw@speakeasy.net>
Alexander Burger  <···@software-lab.de> wrote:
+---------------
| I'm quite surprised how little some people who posted to this thread
| seem to understand about the inner workings of an interpreter.
+---------------

Including yourself, perhaps?  ;-}

+---------------
| They keep on talking about the efficiency of lexical binding, which is
| true for compiled code, but which is not given for an interpreter.
+---------------

You're wrong. See Christian Queinnec's "Lisp in Small Pieces",
especially the chapter on "Fast Interpretation". This is all
very old technology which you have apparently missed...


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Alexander Burger
Subject: Re: Interpreters
Date: 
Message-ID: <2nuo2bF4r9hnU2@uni-berlin.de>
Rob Warnock <····@rpw3.org> wrote:
> Alexander Burger  <···@software-lab.de> wrote:
> +---------------
> | I'm quite surprised how little some people who posted to this thread
> | seem to understand about the inner workings of an interpreter.
> +---------------

> Including yourself, perhaps?  ;-}

Possibly.

> +---------------
> | They keep on talking about the efficiency of lexical binding, which is
> | true for compiled code, but which is not given for an interpreter.
> +---------------

> You're wrong. See Christian Queinnec's "Lisp in Small Pieces",
> especially the chapter on "Fast Interpretation". This is all

I don't have the book. On the Web I find

  * Chapter 6: Fast interpretation Precompile expressions to speed up
                                   ^^^^^^^^^^

You, too, are confusing compilation with interpretation.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Ron Garret
Subject: Re: Interpreters
Date: 
Message-ID: <rNOSPAMon-21D92C.08473711082004@nntp1.jpl.nasa.gov>
In article <··············@uni-berlin.de>,
 Alexander Burger <···@software-lab.de> wrote:

> Rob Warnock <····@rpw3.org> wrote:
> > Alexander Burger  <···@software-lab.de> wrote:
> > +---------------
> > | I'm quite surprised how little some people who posted to this thread
> > | seem to understand about the inner workings of an interpreter.
> > +---------------
> 
> > Including yourself, perhaps?  ;-}
> 
> Possibly.
> 
> > +---------------
> > | They keep on talking about the efficiency of lexical binding, which is
> > | true for compiled code, but which is not given for an interpreter.
> > +---------------
> 
> > You're wrong. See Christian Queinnec's "Lisp in Small Pieces",
> > especially the chapter on "Fast Interpretation". This is all
> 
> I don't have the book. On the Web I find
> 
>   * Chapter 6: Fast interpretation Precompile expressions to speed up
>                                    ^^^^^^^^^^
> 
> You, too, are confusing compilation with interpretation.

No, what's going on here is that you have (tacitly) chosen a different 
definition of the word "interpreter" than everyone else is using.  On 
your definition, caching the results of a stack-frame search is 
compilation, not interpretation.  On that definition of compilation your 
claim is true.  But it seems to me like a pretty silly line to draw in 
the sand.  Obviously, code that takes advantage of a known optimization 
technique will be faster than code that doesn't (otherwise it would not 
be an optimization technique).  But quibbling over whether this 
particular technique constitutes "fast interpretation" or "partial 
compilation" seems to me rather like arguing about angels on the head of 
a pin.

rg
From: Alexander Burger
Subject: Re: Interpreters
Date: 
Message-ID: <2nv6asF4t7sbU1@uni-berlin.de>
Ron Garret <·········@flownet.com> wrote:
> No, what's going on here is that you have (tacitly) chosen a different 
> definition of the word "interpreter" than everyone else is using.  On 

I did not think it was chosen tacitly. I thought that everyone assumes
that interpretation in Lisp operates on Lisp _data_ (i.e. what you get
by 'cons'ing and 'list'ing things together and evaluating it), which
happens in that interpretation context to be treated as code.

Then I tried to provide a more exact description by providing a few
handfuls of lines of assembly code. Unfortunately nobody seems to take
note; everyone continues cheerfully to interprete (sic) my words in his
own mindset.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Ron Garret
Subject: Re: Interpreters
Date: 
Message-ID: <rNOSPAMon-B05643.11525211082004@nntp1.jpl.nasa.gov>
In article <··············@uni-berlin.de>,
 Alexander Burger <···@software-lab.de> wrote:

> Ron Garret <·········@flownet.com> wrote:
> > No, what's going on here is that you have (tacitly) chosen a different 
> > definition of the word "interpreter" than everyone else is using.  On 
> 
> I did not think it was chosen tacitly. I thought that everyone assumes
> that interpretation in Lisp operates on Lisp _data_ (i.e. what you get
> by 'cons'ing and 'list'ing things together and evaluating it), which
> happens in that interpretation context to be treated as code.

Yes, but that has nothing to do with the topic at hand (and the fact 
that you think it does indicates that you really have missed the point).  
Operating on Lisp data as code and having efficient lexical references 
are completely orthogonal issues.

> Then I tried to provide a more exact description by providing a few
> handfuls of lines of assembly code. Unfortunately nobody seems to take
> note; everyone continues cheerfully to interprete (sic) my words in his
> own mindset.

We took note (at least I did) but, as I noted above, your code is 
irrelevant to the topic at hand.  That lexical references *can* be 
implemented inefficiently is not in dispute.

rg
From: Alexander Burger
Subject: Re: Interpreters
Date: 
Message-ID: <2nv8tuF4uvfnU2@uni-berlin.de>
Ron Garret <·········@flownet.com> wrote:
> In article <··············@uni-berlin.de>,
>  Alexander Burger <···@software-lab.de> wrote:

> > Then I tried to provide a more exact description by providing a few
> > handfuls of lines of assembly code. Unfortunately nobody seems to take
> > note; everyone continues cheerfully to interprete (sic) my words in his
> > own mindset.

> We took note (at least I did) but, as I noted above, your code is 
> irrelevant to the topic at hand.  That lexical references *can* be 
> implemented inefficiently is not in dispute.

Fine. If it can be implemented efficiently, the code must be shorter
than the one in my description. Should be easy, then. I'm sure somebody
will enlighten me.

Looking forward,
- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Ron Garret
Subject: Re: Interpreters
Date: 
Message-ID: <rNOSPAMon-91D3B9.12423711082004@nntp1.jpl.nasa.gov>
In article <··············@uni-berlin.de>,
 Alexander Burger <···@software-lab.de> wrote:

> Ron Garret <·········@flownet.com> wrote:
> > In article <··············@uni-berlin.de>,
> >  Alexander Burger <···@software-lab.de> wrote:
> 
> > > Then I tried to provide a more exact description by providing a few
> > > handfuls of lines of assembly code. Unfortunately nobody seems to take
> > > note; everyone continues cheerfully to interprete (sic) my words in his
> > > own mindset.
> 
> > We took note (at least I did) but, as I noted above, your code is 
> > irrelevant to the topic at hand.  That lexical references *can* be 
> > implemented inefficiently is not in dispute.
> 
> Fine. If it can be implemented efficiently, the code must be shorter
> than the one in my description. Should be easy, then. I'm sure somebody
> will enlighten me.

Well, people have already pointed you to Quiennec's book, but the basic 
idea is very simple: you only search the lexical frames once and cache 
the result.  (Your response to this will no doubt be, "But that's 
compilation."  To which I respond, "By your definition, not by ours."  
Angels.  Pins.)
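
For the curious, the caching can be sketched in a few lines of Lisp
(a toy, with a hypothetical node layout, not taken from Queinnec or
any particular system): a variable reference starts out as the list
(ref NAME); the first lookup walks the environment ribs and
destructively stores the found address in the node, so every later
execution of the same form indexes the environment directly:

  ;; env is a list of ribs (simple-vectors of values); rib-names
  ;; mirrors its shape with the variable names.  A reference node
  ;; (ref NAME) is mutated to (ref NAME DEPTH OFFSET) on first use.
  (defun lookup (node env rib-names)
    (if (cddr node)                                   ; address cached?
        (svref (nth (third node) env) (fourth node))  ; direct indexed access
        (loop for depth from 0
              for names in rib-names
              for offset = (position (second node) names)
              when offset
                do (setf (cddr node) (list depth offset))
                   (return (svref (nth depth env) offset)))))

The search from Alexander's C.2 is still there, but it is paid once
per reference form, not once per execution.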

rg
From: ·········@random-state.net
Subject: Re: Interpreters
Date: 
Message-ID: <cffe32$5lnag$1@midnight.cs.hut.fi>
Alexander Burger <···@software-lab.de> wrote:

> I did not think it was chosen tacitly. I thought that everyone assumes
> that interpretation in Lisp operates on Lisp _data_ (i.e. what you get
> by 'cons'ing and 'list'ing things together and evaluating it), which
> happens in that interpretation context to be treated as code.

/me has an intense sense of deja-vu

Your belief is that only this is interpretation:

 http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-26.html#%_sec_4.1.1

Everyone else here considers this interpretation too:

 http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-26.html#%_sec_4.1.7

Ignorance of facts doesn't make them go away.

Cheers,

 -- Nikodemus                   "Not as clumsy or random as a C++ or Java. 
                             An elegant weapon for a more civilized time."
From: Alexander Burger
Subject: Re: Interpreters
Date: 
Message-ID: <2o105bF5auh1U1@uni-berlin.de>
·········@random-state.net wrote:
> Alexander Burger <···@software-lab.de> wrote:
> > I did not think it was chosen tacitly. I thought that everyone assumes
> > that interpretation in Lisp operates on Lisp _data_ (i.e. what you get
> > by 'cons'ing and 'list'ing things together and evaluating it), which
> > happens in that interpretation context to be treated as code.

> Your belief is that only this is interpretation:
>  http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-26.html#%_sec_4.1.1

Yes.


> Everyone else here considers this interpretation too:
>  http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-26.html#%_sec_4.1.7

Yes, I would never call that interpretation.

Very funny. We discussed single-pass Lisp snippets, the necessity
and overhead of compilation, and the advantages and disadvantages of
simplicity. And now you come up with that!

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Ron Garret
Subject: Re: Interpreters
Date: 
Message-ID: <rNOSPAMon-37F83F.09054512082004@nntp1.jpl.nasa.gov>
In article <··············@uni-berlin.de>,
 Alexander Burger <···@software-lab.de> wrote:

> > Everyone else here considers this interpretation too:
> >  http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-26.html#%_sec_4.1.7
> 
> Yes, I would never call that interpretation.

Then you will always have trouble communicating.  SICP is one of the 
canonical texts on this sort of thing, and large segments of the 
community follow its terminological lead.

> Very funny. We discussed single-pass Lisp snippets, the necessity
> and overhead of compilation, and the advantages and disadvantages of
> simplicity. And now you come up with that!

But the extra code required to optimize only lexical references (which 
is the topic under discussion) is very much less than that -- a half 
dozen LOC at most, still well within the realm of any reasonable 
person's definition of "simplicity".

rg
From: Gisle Sælensminde
Subject: Re: Interpreters
Date: 
Message-ID: <slrnchkfsa.p53.gisle@kaktus.ii.uib.no>
In article <··············@uni-berlin.de>, Alexander Burger wrote:
> Rob Warnock <····@rpw3.org> wrote:
>> Alexander Burger  <···@software-lab.de> wrote:
>> +---------------
>> | I'm quite surprised how little some people who posted to this thread
>> | seem to understand about the inner workings of an interpreter.
>> +---------------
> 
>> Including yourself, perhaps?  ;-}
> 
> Possibly.
> 
>> +---------------
>> | They keep on talking about the efficiency of lexical binding, which is
>> | true for compiled code, but which is not given for an interpreter.
>> +---------------
> 
>> You're wrong. See Christian Queinnec's "Lisp in Small Pieces",
>> especially the chapter on "Fast Interpretation". This is all
> 
> I don't have the book. On the Web I find
> 
>   * Chapter 6: Fast interpretation Precompile expressions to speed up
>                                    ^^^^^^^^^^
> 
> You, too, are confusing compilation with interpretation.

There is no conflict in having a compilation step in an interpreter. Many
systems do this -- perl, python, the CLISP Common Lisp interpreter, etc., as
well as non-JIT Java interpreters. In that case the code is first compiled into
bytecode, and this bytecode is interpreted; this is unlike an interpreter
that interprets the source by traversing a data structure created by parsing
the source.

It seems, then, that there are three common ways of executing a program:

1)  source -> parsing into data structure -> interpretation by traversing the data structure

2)  source -> compilation to bytecode -> interpretation by simulating a virtual machine

3)  source -> parsing -> machine code -> execution of native code

1) is clearly an interpreter, while 3) is clearly a compiler. It seems to me that
you and some of the other posters here disagree on whether 2) is an interpreter.
Such a system has both an interpreter and a compiler built in, but I think most
people will call it an interpreter, as long as it doesn't go to native code. As far
as I can interpret your posts, it seems that you consider only 1) a real
interpreter. Many of the posters who disagreed with you seem to assume a
bytecode interpreter. For whether lexical or dynamic scoping is relatively most
efficient, this matters.







> 
> - Alex


-- 
Gisle Sælensminde, PhD student, scientific programmer
Computational biology unit, University of Bergen, Norway
Email: ·····@cbu.uib.no
From: Alexander Burger
Subject: Re: Interpreters
Date: 
Message-ID: <2nv5m5F576chU1@uni-berlin.de>
Gisle Sælensminde <·····@kaktus.ii.uib.no> wrote:
> It seems, then, that there are three common ways of executing a program:

> 1)  source -> parsing into data structure -> interpretation by traversing the data structure

> 2)  source -> compilation to bytecode -> interpretation by simulating a virtual machine

> 3)  source -> parsing -> machine code -> execution of native code

> 1) is clearly an interpreter, while 3) is clearly a compiler. It seems to me that
> you and some of the other posters here disagree on whether 2) is an interpreter.
> Such a system has both an interpreter and a compiler built in, but I think most
> people will call it an interpreter, as long as it doesn't go to native code. As far
> as I can interpret your posts, it seems that you consider only 1) a real
> interpreter. Many of the posters who disagreed with you seem to assume a
> bytecode interpreter. For whether lexical or dynamic scoping is relatively most
> efficient, this matters.

It depends on the level, or point of view.

When you look at the microcode level, even 3) is an interpreter.

When looking at the Lisp level, 2) is a compiler for me. Later, when the
bytecode is "executed", then of course there is a byte-code
_interpreter_, but we are in a completely different world. We are
_below_ the Lisp level.

You might call (as Duane Rettig did) the 'read' function a compiler.
Perfectly ok, but this is _above_ the Lisp level.

And so on.

The problem is that I naturally assumed - as we are in a Lisp group - we
were talking about the Lisp level view. If such an assumption is not
allowed, such discussions are pointless.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Duane Rettig
Subject: Re: Interpreters
Date: 
Message-ID: <4zn51tmek.fsf@franz.com>
Alexander Burger <···@software-lab.de> writes:

> Gisle Sælensminde <·····@kaktus.ii.uib.no> wrote:
> > It then seems like there are three common ways of executing a program
> 
> > 1)  source -> parsing into datastructure -> interpretation by traversing datastructure
> 
> > 2)  source -> compilation to byte code -> interpretation by simulating a virtual machine.
> 
> > 3)  source -> parsing -> machine code -> execution of native code
> 
> > 1) is clearly an interpreter, while 3) is clearly a compiler. For me it seems like
> > you and some of the other posters here disagree on whether 2) is an interpreter.
> > Such a system has both an interpreter and a compiler built in, but I think most
> > people will call it an interpreter as long as it doesn't go to native code. As far
> > as I can interpret your posts, it seems like you consider only 1) a real
> > interpreter. Many of the posters that disagreed with you seem to assume a
> > bytecode interpreter. This matters for whether lexical or dynamic scoping is
> > relatively more efficient.
> 
> It depends on the level, or point of view.

and upon which assumptions you make explicit, and which you
hold to yourself.

> When you look at the microcode level, even 3) is an interpreter.
> 
> When looking at the Lisp level, 2) is a compiler for me. Later, when the
> bytecode is "executed", then of course there is a byte-code
> _interpreter_, but we are in a completely different world. We are
> _below_ the Lisp level.
> 
> You might call (as Duane Rettig did) the 'read' function a compiler.
> Perfectly ok, but this is _above_ the Lisp level.

I was keying on what you had said, not what you had apparently meant.
I'm not a mind-reader; I cannot react to implicit assumptions; only
explicit ones.

One thing you have still not clarified is when you consider compilation
to be bad, and when you consider compilation (at different levels) to
be good.  I'm interested in this answer, because I very seldom consider
compilations to be bad [one area of note where I do consider it to be
less than desirable is in the area of debuggability, but we're working
to narrow the gap at that level between the debuggability of compiled
code and that of interpreted code].

> And so on.
> 
> The problem is that I naturally assumed - as we are in a Lisp group - we
> were talking about the Lisp level view.

If you already knew that this was the case, then your mistake was in not
helping us to understand by _explicitly_ stating that you were
speaking of different levels than the one you knew us to be accustomed
to talking about.

> If such an assumption is not allowed, such discussions are pointless.

If such assumptions are not explicitly stated and understood by your
listeners (readers), then such discussions are pointless.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Alexander Burger
Subject: Re: Interpreters
Date: 
Message-ID: <2o10o4F5auh1U2@uni-berlin.de>
Duane Rettig <·····@franz.com> wrote:
> One thing you have still not clarified is when you consider compilation
> to be bad, and when you consider compilation (at different levels) to
> be good.  I'm interested in this answer, because I very seldom consider
> compilations to be bad [one area of note where I do consider it to be

Isn't that easy? Use compilation when it is beneficial, and don't use it
if it is not. Like everywhere else, this depends on the circumstances.
No reason for dogmatism.

On the 'C' level the gain achieved through a compiler is big, not to
mention that there are almost no 'C' interpreters. The 'C' level does not
change as often as the application levels, and the tasks performed there
are more primitive (easier to debug). So I gladly choose compiled 'C'.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Rob Warnock
Subject: Re: Interpreters
Date: 
Message-ID: <bpCdndK-YagczILcRVn-gg@speakeasy.net>
Gisle Sælensminde  <·····@kaktus.ii.uib.no> wrote:
+---------------
| Alexander Burger wrote:
| > Rob Warnock <····@rpw3.org> wrote:
| >> Alexander Burger  <···@software-lab.de> wrote:
| >> +---------------
| >> | They keep on talking about the efficiency of lexical binding, which is
| >> | true for compiled code, but which is not given for an interpreter.
| >> +---------------
| > 
| >> You're wrong. See Christian Queinnec's "Lisp in Small Pieces",
| >> especially the chapter on "Fast Interpretation". This is all
| > 
| > I don't have the book. In the Web I find
| >   * Chapter 6: Fast interpretation Precompile expressions to speed up
| 
| There is no conflict in having a compilation step in an interpreter.
+---------------

Agreed, though I would say it slightly differently: There is no conflict
in having a *preprocessing* step in an interpreter, which is precisely
what Chapter 6 of L.i.S.P. is all about. It is *NOT* about "compiling";
for that see Chapter 7 "ByteCode compilation" or Chapter 10 "Compilation
towards C".

The *preprocessing* steps mentioned in Chapter 6 still leave you with
S-expressions to be interpreted as usual; it's just that some of the
S-expressions have been re-arranged a little bit. For example, in a
system in which lexical environments are represented by vectors of
vectors of values [Queinnec offers other alternative implementations,
but vectors-of-vectors is one of the classic ones], during the
preprocessing step lexical variables are allocated offsets in the
current vector of values, and symbols which are in the scope of some
lexical variable of the same name are transformed into compact LEXVAR
objects which contain the "level" (the offset in the vector-of-vectors
pointing to the value vector) and "offset" (the index in the desired vector).
Thus at run time, when the interpreter sees a LEXVAR, it does *NOT* have
to look up anything in a symbol table -- it just double-indexes through
the vector-of-vectors.

The same can be done to global variables as well, in a manner that
can easily be extended to threading. That is, global variables can be
allocated an index in the "global variable value table" when first seen,
and during the preprocessing step symbols which are in the scope of
some global variable of the same name are transformed into compact
GLOVAR objects which contain the index of that global variable's value
in the global value table. Thus at run time, when the interpreter sees
a GLOVAR, it does *NOT* have to look up anything in a symbol table --
it just indexes into the global value table.

[The extension of GLOVAR handling to threads is left as an exercise for
the reader (hint from David Wheeler: "Any problem in computer science
can be solved with another level of indirection"), but suffice it to
say that both Corman Lisp and Franz's Allegro do something similar for
their global/special variables.]
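The GLOVAR idea can be sketched the same way. The sketch below is in Python with invented names (`glovar`, `set_global`; nothing here is Queinnec's actual representation): the first sight of a global allocates it a slot in the value table, preprocessing rewrites references to carry that slot, and every later access is plain indexing:

```python
# Sketch of GLOVAR handling: globals get an index in a value table when
# first seen; references are rewritten to carry the index, so run-time
# access is one array-indexing operation. Names invented for illustration.

global_index = {}      # symbol -> slot in the global value table
global_values = []     # the global value table itself

def glovar(symbol):
    """Allocate (once) and return the symbol's slot in the value table."""
    if symbol not in global_index:
        global_index[symbol] = len(global_values)
        global_values.append(None)
    return global_index[symbol]

def set_global(symbol, value):
    global_values[glovar(symbol)] = value

slot = glovar('*base*')        # preprocessing rewrites '*base*' -> slot
set_global('*base*', 16)
value = global_values[slot]    # run-time read: plain indexing, no lookup
```

The thread-local extension mentioned above would add one more indirection in front of `global_values`, per Wheeler's dictum.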

Transformation of symbols into LEXVARs & GLOVARs can be done in a single
pass over the source S-expression, along with other transformations such
as macro expansion. While not providing as much performance improvement
as (say) byte-compiling, such preprocessing is *itself* fast, and the
resulting code can be interpreted much faster than un-preprocessed
S-expressions.

To hammer the point in: The result of such preprocessing is *still*
just an S-expression, *not* "compiled" code. The run-time process is
still "interpretation" of the internal S-expression representation.
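The LEXVAR transformation described above can be sketched compactly. The following is an illustrative Python sketch (the names `Lexvar`, `preprocess`, and `run` are invented, and L.i.S.P.'s actual representation differs): a one-pass rewrite turns in-scope symbols into (level, offset) pairs, so the interpreter double-indexes instead of searching by name:

```python
# Sketch of the Chapter-6 style preprocessing step: replace symbols in
# lexical scope with (level, offset) coordinates, so the interpreter
# indexes a vector-of-vectors instead of searching a symbol table.
# All names here are invented for illustration.

from collections import namedtuple

Lexvar = namedtuple('Lexvar', ['level', 'offset'])

def preprocess(expr, frames):
    """frames: list of variable-name lists, innermost first."""
    if isinstance(expr, str):
        for level, frame in enumerate(frames):
            if expr in frame:
                return Lexvar(level, frame.index(expr))
        return expr                       # free symbol: left untouched here
    if isinstance(expr, list):
        return [preprocess(e, frames) for e in expr]
    return expr

def run(expr, env):
    """env: list of value vectors, matching the frames innermost first."""
    if isinstance(expr, Lexvar):
        return env[expr.level][expr.offset]   # no symbol-table lookup
    if isinstance(expr, list):
        op, *args = expr
        if op == '+':
            return sum(run(a, env) for a in args)
    return expr

# run(preprocess(['+', 'x', 'y'], [['x'], ['y']]), [[10], [32]]) gives 42
```

The result of `preprocess` is still an S-expression-shaped structure that `run` walks as usual; only the variable references have been rearranged.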

+---------------
| Many systems like perl, python, the CLISP common lisp interpreter etc,
| as well as non-jit java interpreters. In that case the code is first
| compiled into bytecode, and this bytecode is interpreted.
+---------------

As noted above, there's even value to a preprocessing step which is
*less* than byte-code compilation.

+---------------
| It then seems like there are three common ways of executing a program
+---------------

Four ways. (See my added #1.5, below.)

+---------------
| 1)  source -> parsing into datastructure -> interpretation by traversing
| datastructure
+---------------

1.5) source -> parsing into data structure -> some preprocessing of the
     data structure -> interpretation by traversing the data structure.

+---------------
| 2)  source -> compilation to byte code -> interpretation by simulating a
| virtual machine.
| 
| 3)  source -> parsing -> machine code -> execution of native code
| 
| 1) is clearly an interpreter, while 3) is clearly a compiler. For me it
| seems like you and some of the other posters here disagree on whether
| 2) is an interpreter.
+---------------

Regardless of one's opinion of #2 [I personally consider it "compiling"],
clearly both #1 and #1.5 are "interpretation".


-Rob

p.s. And to be complete, we should mention #3.3 optimizing compilers,
#3.7 interprocedural optimizing compilers, #3.9 interprocedural optimizing
compilers tuned with feedback from actual trace statistics, etc., etc.

And JIT is something that starts with #1.5 or #2 and jumps to #3.x
"on demand"...

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Christophe Turle
Subject: Re: Interpreters
Date: 
Message-ID: <cfssaj$vki$1@amma.irisa.fr>
Rob Warnock wrote:
> Gisle Sælensminde  <·····@kaktus.ii.uib.no> wrote:
> [...]
> +---------------
> | 1)  source -> parsing into datastructure -> interpretation by traversing
> | datastructure
> +---------------
> 
> 1.5) source -> parsing into data structure -> some preprocessing of the
>      data structure -> interpretation by traversing the data structure.
> [...]
> Regardless of one's opinion of #2 [I personally consider it "compiling"],
> clearly both #1 and #1.5 are "interpretation".


imho, #1.5 is a combination of :

1- translating the source code to another source code (even if the two are S-exp based). This step is a compilation one.
2- interpreting the new source code generated.

a clue for this is in the title: "Chapter 6: Fast interpretation - Precompile expressions to ..."

ctu
From: Alexander Burger
Subject: Re: Interpreters
Date: 
Message-ID: <2nuapuF4trp7U1@uni-berlin.de>
Alexander Burger <···@software-lab.de> wrote:
> C.2. Access the value of 'X'
>    ...
>    Search for 'X' in the stack frame, and access the following location
>    ...

> So the problem lies with C.2. The access to values of symbols can be
> quite time consuming if the environment ist large. And symbol values are
> typically accessed more than once in a function body.

Actually, the situation is even worse:

As the lexical scope is very restricted (you cannot access free
variables), you'll have to check for a "special" variable before you
search the stack and global frames, and treat it differently. You'll
have to do that check at each access, because we are in an interpreter
and the symbol's type might have changed while the function is
executing.
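The per-access check described here can be made concrete. In this illustrative Python sketch (all names are invented; no real implementation is quoted), every variable read first tests whether the symbol is currently special before searching the lexical frames, and the test cannot be hoisted out because specialness may change while the function runs:

```python
# Sketch of the access path described above: a lexically scoped
# interpreter must decide, at EVERY read, whether a symbol is "special"
# (dynamically bound) or lexical, because that status can change at run
# time. All names here are invented for illustration.

special_values = {}        # dynamic bindings: symbol -> value
special_symbols = set()    # which symbols are currently special

def lookup(symbol, lexical_frames):
    # This check runs on every single access, not once per function:
    if symbol in special_symbols:
        return special_values[symbol]          # dynamic lookup
    for frame in lexical_frames:               # then search the stack frames
        if symbol in frame:
            return frame[symbol]
    raise NameError(symbol)

frames = [{'n': 1}, {'m': 2}]
special_symbols.add('m')
special_values['m'] = 99
# lookup('m', frames) now returns 99, shadowing the lexical 2;
# removing 'm' from special_symbols makes the same call return 2.
```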

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Edi Weitz
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87brhkusi1.fsf@bird.agharta.de>
On 9 Aug 2004 17:00:44 GMT, Alexander Burger <···@software-lab.de> wrote:

> please show me a Lisp interpreter faster than Pico Lisp

If I do this

  (de fac (n) (if (= n 0) 1 (* n (fac (- n 1)))))

in PicoLisp 2.0.13 and then evaluate (fac 5000) it takes about 15
seconds on my laptop.

With CLISP 2.33.2 and

  (defun fac (n) (if (= n 0) 1 (* n (fac (- n 1)))))

(fac 5000) needs less than 0.3 seconds without compiling the function.

Edi.

-- 

"Lisp doesn't look any deader than usual to me."
(David Thornley, reply to a question older than most languages)

Real email: (replace (subseq ·········@agharta.de" 5) "edi")
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2npsjqF38lclU1@uni-berlin.de>
Edi Weitz <········@agharta.de> wrote:
> On 9 Aug 2004 17:00:44 GMT, Alexander Burger <···@software-lab.de> wrote:

> > please show me a Lisp interpreter faster than Pico Lisp
> If I do this
>   (de fac (n) (if (= n 0) 1 (* n (fac (- n 1)))))
> in PicoLisp 2.0.13 and then evaluate (fac 5000) it takes about 15
> seconds on my laptop.

> With CLISP 2.33.2 and
>   (defun fac (n) (if (= n 0) 1 (* n (fac (- n 1)))))
> (fac 5000) needs less than 0.3 seconds without compiling the function.


Clever! However, I wrote:

> ... faster than Pico Lisp, regarding the bare evaluation mechanism.
                                           ^^^^^^^^^^^^^^^

I expected something like your example. You can't compare apples with
bananas, because Pico Lisp does all arithmetic in bignums (I said, I
usually don't care so much about speed). You are solely testing the
bignum performance vs. probably short integers.

But we were talking about binding strategies. The 'tst' example in the
paper used only list operations.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Edi Weitz
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87vffstamw.fsf@bird.agharta.de>
On 9 Aug 2004 18:03:41 GMT, Alexander Burger <···@software-lab.de> wrote:

> I expected something like your example. You can't compare apples
> with bananas,

And of course I expected you to reject any example that shows another
Lisp interpreter to be faster than yours... :)

> because Pico Lisp does all arithmetics in Bignums (I said, I usually
> don't care so much about speed). You are solely testing the bignum
> performance vs. probably short integers.

What do you mean by "probably short integers?" Both PicoLisp as well
as CLISP return the correct result. How do you compute (FAC 5000) with
"short" integers?

Edi.

-- 

"Lisp doesn't look any deader than usual to me."
(David Thornley, reply to a question older than most languages)

Real email: (replace (subseq ·········@agharta.de" 5) "edi")
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nq0liF3bb0pU1@uni-berlin.de>
Edi Weitz <········@agharta.de> wrote:
> On 9 Aug 2004 18:03:41 GMT, Alexander Burger <···@software-lab.de> wrote:

> > I expected something like your example. You can't compare apples
> > with bananas,

> And of course I expected you to reject any example that shows another
> Lisp interpreter to be faster than yours... :)

Why do I have to say so often that simplicity, and not speed, was my
intention? I shouldn't have started a discussion about relative
execution speeds, because Pico Lisp is not optimized for that at all.
That's a dangerous mine field for me ;-)


> What do you mean by "probably short integers?" Both PicoLisp as well
> as CLISP return the correct result. How do you compute (FAC 5000) with
> "short" integers?

Sorry, I just noticed my mistake... I apologize for not looking closely
at what you wrote. But you might agree that your example doesn't measure
binding strategies.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Edi Weitz
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87acx43x8k.fsf@bird.agharta.de>
On 9 Aug 2004 19:12:52 GMT, Alexander Burger <···@software-lab.de> wrote:

> But you might agree that your example doesn't measure binding
> strategies.

Sure. But at least it computes a value. You might agree that your
'tst' example from your paper doesn't do anything useful at all... :)
I'd rather look at real-world examples instead of measuring binding
strategies.

Cheers,
Edi.

-- 

"Lisp doesn't look any deader than usual to me."
(David Thornley, reply to a question older than most languages)

Real email: (replace (subseq ·········@agharta.de" 5) "edi")
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nr4rfF3sb2mU1@uni-berlin.de>
Edi Weitz <········@agharta.de> wrote:
> On 9 Aug 2004 19:12:52 GMT, Alexander Burger <···@software-lab.de> wrote:

> > But you might agree that your example doesn't measure binding
> > strategies.

> Sure. But at least it computes a value. You might agree that your
> 'tst' example from your paper doesn't do anything useful at all... :)

Hm, computing the factorial of 5000 isn't very useful either.

> I'd rather look at real-world examples instead of measuring binding
> strategies.

The discussion was about binding strategies, not arithmetic.

Real-world examples are applications, involving more than
isolated functions.

> Cheers,
> Edi.

Have a nice day,
- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Edi Weitz
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87vffs1kf2.fsf@bird.agharta.de>
On 9 Aug 2004 13:09:34 GMT, Alexander Burger <···@software-lab.de> wrote:

> Ok, probably it is no problem nowadays. As you may have noticed, my
> last contact with (non-self-written) Lisp was nearly 20 years
> ago. Most Lisps at that time used dynamic binding for the
> interpreter and lexical binding for the compiler, introducing
> significant differences.
>
> Then it depends on how you define "equivalent". Identical output for
> any given input? I strongly doubt that, considering the complexity
> of that matter.
>
> [...]
>
> Then you have the double disadvantage of
>
> - a complicated system (because it has to support the compiler)
>
> - and inefficient execution (because the interpreter will run with a
>   lexical binding strategy to stay compatible with the (now unused)
>   compiler. Lexical binding is not efficient in an interpreter).

So you've never tried any current Lisp implementation, you haven't
heard about Common Lisp or the ANSI standard, but you expect us to
believe that your Pico stuff is The Next Big Thing[TM] because it's
better than some mythical Lisp you happened to use two centuries ago?

Cool! Do you also have a BASIC interpreter that's better than the one
that came with my old Apple II? Why don't you try to sell it to the
VB.NET guys?

Cheers,
Edi.

-- 

"Lisp doesn't look any deader than usual to me."
(David Thornley, reply to a question older than most languages)

Real email: (replace (subseq ·········@agharta.de" 5) "edi")
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2npgggF32atnU1@uni-berlin.de>
Edi Weitz <········@agharta.de> wrote:
> So you've never tried any current Lisp implementation, you haven't
> heard about Common Lisp or the ANSI standard, but you expect us to
> believe that your Pico stuff is The Next Big Thing[TM] because it's
> better than some mythical Lisp you happened to use two centuries ago?

No, Pico Lisp is _not_ the next big thing. It is 20 years old by now.
And it is not supposed to be better in any way. It is _simpler_. It
gives me control at all levels down to the bare metal, and allows me to
write better programs, I hope ;-)

Why should I have cared about other Lisp systems, when I could put in
exactly those features that I liked? I did read the Common Lisp book by
Steel in the 1980s, though, and was sorry about what they did to the
beautiful clean Lisp language; designing such a monster. But I accept
that it is useful for many Lisp programmers, and would ask nobody to
abandon it.

> Cheers,
> Edi.

sincerely
- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <86isbstg4d.fsf@goldenaxe.localnet>
Alexander Burger <···@software-lab.de> writes:

> Then you have the double disadvantage of
[...]
> - and inefficient execution (because the interpreter will run with a
>   lexical binding strategy to stay compatible with the (now unused)
>   compiler. Lexical binding is not efficient in an interpreter).

I implemented a Lisp-like interpreter in x86 assembler (one that walks
s-exps *g*) that supports lexical binding and closures. I found it
easier to implement than dynamic binding. My mini-interpreter does not
support dynamic binding, btw.

Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2npuduF3fi8kU2@uni-berlin.de>
In article <··············@goldenaxe.localnet> you wrote:
> Alexander Burger <···@software-lab.de> writes:

> > Then you have the double disadvantage of
> [...]
> > - and inefficient execution (because the interpreter will run with a
> >   lexical binding strategy to stay compatible with the (now unused)
> >   compiler. Lexical binding is not efficient in an interpreter).

> I implemented a Lisp-like interpreter in x86 assembler (one that walks
> s-exps *g*) that supports lexical binding and closures. I found it
> easier to implement than dynamic binding. My mini-interpreter does not
> support dynamic binding, btw.

Nice to hear :-)

Concerning the binding strategy, I talked about less work for the
processor, not for the implementor.

It is not so much about how to bind symbols. Though binding is only a push
of the old value, followed by setting the new value, the main difference is
the runtime access of symbol values. In dynamic shallow binding, no lookup
is required, just a single pointer de-reference. And value access is
typically more frequent than function body entry/exit.
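Dynamic shallow binding as described here fits in a few lines. The sketch below is in Python with invented names (`value_cell`, `save_stack`; PicoLisp's internals are not being quoted): binding pushes the old value and overwrites the symbol's single value cell, so every later access is one dereference:

```python
# Sketch of dynamic shallow binding: each symbol owns one value cell.
# Binding = push old value and overwrite the cell; access = a single
# dereference; unbinding = pop. Names invented for illustration.

value_cell = {}    # symbol -> current value (the one cell per symbol)
save_stack = []    # saved old values, restored on function exit

def bind(symbol, new_value):
    save_stack.append((symbol, value_cell.get(symbol)))
    value_cell[symbol] = new_value

def unbind():
    symbol, old = save_stack.pop()
    value_cell[symbol] = old

bind('x', 1)               # enter outer function: x = 1
bind('x', 2)               # enter inner function: x rebound to 2
inner = value_cell['x']    # access is a direct read, no search at all
unbind()                   # leave inner function
outer = value_cell['x']    # x is 1 again
```

This is why value access is cheap here: the cost moved to function entry/exit, which, as noted above, happens less often than access.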


>                     ____________________________
>  Julian Stecklina  /  _________________________/
>   ________________/  /
>   \_________________/  LISP - truly beautiful

Cheers,
- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Hartmann Schaffer
Subject: Re: Lisp in hardware
Date: 
Message-ID: <fRTRc.266$H23.3135@newscontent-01.sprint.ca>
Alexander Burger wrote:
> ...
> Concerning the binding strategy, I talked about less work for the
> processor, not for the implementor.
> 
> It is not so much how to bind symbols. Though this is only a push of the
> old value, followed by setting the new value, the main difference is the
> runtime access of symbol values. In dynamic shallow binding, no lookup
> is required, just a single pointer de-reference. And value access is
> typically more frequent than function body entry/exit.

with lexical scoping, you can resolve the run-time access immediately
when you store the form. No lookup is required at all.

hs
From: Thomas Schilling
Subject: Re: Lisp in hardware
Date: 
Message-ID: <opscgsvtdqtrs3c0@news.CIS.DFN.DE>
Alexander Burger wrote:

>> So why is this bad? Except for the one-time overhead of compilation this
>> "equivalent" code beats "interpreting s-expressions" hands-down...
>
> This is bad for several reasons:
>
> 1. The resulting program is not necessarily 100% equivalent.

But it's the purpose of a Lisp implementation to create a program that is
100% equivalent to the source code. Otherwise it's a bug.

> 2. Many of my applications consist of plenty of one-time code, typically
>    GUI components. The file is loaded, executed, produces some side
>    effects (Applet layout), and is thrown away. A compilation pass would
>    be counter-productive.

The compilation takes place when you write the code and add it to your Lisp
image. So nothing is compiled at runtime unless you use eval. And I doubt
you need that for the mentioned purpose. (Unless you don't have lexical
closures, since you'll hardly eval s-exps input by the user.)

> 3. The GUI components contain many little s-expressions (function bodies
>    or just simple lists) which may be later evaluated depending on user
>    actions like mouse click and button press. I doubt it makes sense to
>    compile such code fragments, which even cannot be meaningfully
>    executed outside their dynamic environment.

Where are these actions run? On the client or on the server? If they're run
on the server it would even be safer not to store direct code in your GUI
but just the function name.

Parsing a list from text isn't slower in a compiled system.

> 4. It has great advantages to execute s-expressions. Not so much in
>    application programming, because the often-cited self-modifying Lisp
>    programs are rare.

Never heard of that. Actually, how should this work? First-class macros? Calls
to eval at runtime? I guess the latter.

> But for the development of the Lisp system itself.

That may be.

>    I want full control over my programming system. It has to be simple,
>    so that I can understand and modify every aspect of it.
>
>    For example, tracing is done in Pico Lisp by cons'ing a trace symbol
>    in front of function or method bodies. The debugger (single stepper)
>    consists of a few dozen lines which constantly modify s-expressions
>    while they are executed. The same holds for the profiler.
>
> It is all so simple if you dare to abandon the compiler. Many lisp-ish
> (in my opinion) programming styles like (3) you don't even consider if
> you are caught in monolithic lexical compiler think.

I wouldn't want to miss lexical variables and closures, arrays, 
hash-tables, generic functions, structures, etc. just for the sake of a 
"real" lisp implementation.

-- 
      ,,
     \../   /  <<< The LISP Effect
    |_\\ _==__
__ | |bb|   | _________________________________________________
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2npchoF38gh5U1@uni-berlin.de>
Thomas Schilling <······@yahoo.de> wrote:
> Alexander Burger wrote:

> > 3. The GUI components contain many little s-expressions (function bodies
> >    or just simple lists) which may be later evaluated depending on user
> >    actions like mouse click and button press. I doubt it makes sense to
> >    compile such code fragments, which even cannot be meaningfully
> >    executed outside their dynamic environment.

> Where're these actions run? On the client or on the server? If they're run 
> on the server it would even be safer to not store direct code in your gui 
> but just the function name.

The whole application logic runs on the server. The applet connects back
to the server, using a dedicated protocol. The server instructs the
applet to dynamically create the GUI components, and a corresponding
Lisp object resides on the server for each GUI component. All events
propagate from the applet to the server. The applet is completely dumb.
I think it is safe.


> > 4. It has great advantages to execute s-expressions. Not so much in
> >    application programming, because the often-cited self-modifying Lisp
> >    programs are rare.

> Never heard of. Actually how should this work? First-class macros? Calls 

In the 1980s, researchers promoted Lisp as an ideal AI language,
because Lisp programs can modify themselves while they run, and are thus
able to "learn". Even back then I thought this was nonsense.

> to eval at runtime? I guess the latter.

Just modify the function:

   (putd 'foo (some-list-operations-with (getd 'foo)))
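A rough analogue of that putd/getd idiom can be sketched in Python, treating the definition as a data structure (an AST standing in for the s-expression) that is edited and re-installed. The function `foo` and the transformation here are invented purely for illustration:

```python
# Hypothetical sketch of (putd 'foo (some-list-operations-with (getd 'foo))):
# the code is kept as data, transformed at runtime, and re-installed.
import ast
import copy

SRC = "def foo(x):\n    return x + 1\n"

env = {}
tree = ast.parse(SRC)                      # "getd": the definition as data
exec(compile(tree, "<foo>", "exec"), env)
assert env["foo"](10) == 11

new_tree = copy.deepcopy(tree)             # "some list operations":
for node in ast.walk(new_tree):            # change the constant 1 to 2
    if isinstance(node, ast.Constant) and node.value == 1:
        node.value = 2
exec(compile(new_tree, "<foo>", "exec"), env)  # "putd": install the result
assert env["foo"](10) == 12                # the function has changed
```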



> I wouldn't want to miss lexical variables and closures, arrays, 
> hash-tables, generic functions, structures, etc. just for the sake of a 
> "real" lisp implementation.

Sorry, you misunderstood me. I don't care about "real" Lisp. But I don't
want to have fancy features which complicate things for just a small
benefit. I often wonder at how much confusion exists about Common Lisp
when I follow the questions asked in this group.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <86acx45v1z.fsf@goldenaxe.localnet>
Thomas Schilling <······@yahoo.de> writes:

>> 4. It has great advantages to execute s-expressions. Not so much in
>>    application programming, because the often-cited self-modifying Lisp
>>    programs are rare.
>
> Never heard of. Actually how should this work? First-class macros?
> Calls to eval at runtime? I guess the latter.

You could have a structure editor that edits the s-exp that a function
definition is made of in place. But then again, you could fire up
Emacs, edit the function and hit C-c C-c...

There are plenty of other ways for a program to change parts of itself,
too.

Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful
From: Paul Wallich
Subject: Re: Lisp in hardware
Date: 
Message-ID: <cf5msk$abm$1@reader1.panix.com>
Christopher C. Stacy wrote:

>>>>>>On 8 Aug 2004 07:34:07 GMT, Alexander Burger ("Alexander") writes:
> 
>  Alexander> Frank Buss <··@frank-buss.de> wrote:
>  >> But first I want to know how it was solved in other systems, like the 
>  >> Symbolics Lisp Machine. Where can I find a hardware description of the 
> 
>  Alexander> I believe that these machines were not really "Lisp Machines",
>  Alexander> because they had a normal processor architecture with just a few
>  Alexander> instructions optimized for compiled Lisp code. And such code is
>  Alexander> IMHO not "Lisp", because it was transformed into another
>  Alexander> language. It breaks, for example, the fundamental principle of
>  Alexander> the equivalence of code and data.
> 
> When people say the proper noun "Lisp Machine", they are generally
> referring to the MIT Lisp Machine (eg. the CADR) and the follow-on
> machines created by the same people at the commercial spin-offs,
> Symbolics and LMI.   (Xerox also had Lisp workstations that could
> rightly be called "Lisp machines", but around MIT we called those
> "D Machines", and I believe that's also what Xerox called them.)

Not-entirely-pedantic note: the D-machines (dolphin, dorado, 
dandewhatever) were designed to execute any of a number of 
virtual-machine instruction sets of which the Interlisp-D code was just 
one. With a different bunch of microcode loaded they became Smalltalk or 
Mesa machines, for example.

This kind of flexibility (along with the general principle of 
turing-equivalence) shows, imo, how incomplete such ideas as "the 
equivalence of code and data" ultimately are. At some point you have to 
represent your lisp-stuff in terms of things that are not lisp-stuff, be 
it ASCII or Unicode standing in for the characters of s-expressions, or 
1's and 0's in a register, or organized collections of charge carriers 
or photons. And you will potentially (or even necessarily) be able to do 
things with that non-lisp stuff that violate lisp semantics. Towers of 
reflection notwithstanding, there's always going to be an 
implementation, and picking a particular metric to argue about its 
purity seems counterproductive to me.

paul
From: Christopher C. Stacy
Subject: Re: Lisp in hardware
Date: 
Message-ID: <un015l1bd.fsf@news.dtpq.com>
>>>>> On Sun, 08 Aug 2004 13:11:48 -0400, Paul Wallich ("Paul") writes:

 Paul> Not-entirely-pedantic note: the D-machines (dolphin, dorado,
 Paul> dandewhatever) were designed to execute any of a number of
 Paul> virtual-machine instruction sets of which the Interlisp-D code was
 Paul> just one. With a different bunch of microcode loaded they became
 Paul> Smalltalk or Mesa machines, for example.

Yes, and likewise the CADR and 3600 were microcoded.  
(Also, the KL model of the PDP-10 was microcoded; the ITS operating
system had some different instructions than the DEC microcode, 
but not for anything to do with Lisp.)  Actually, lots of computers
were microcoded - we're only talking here about user-writable microcode, 
loaded as part of the cold boot sequence.  However, unlike the D Machines, 
neither the PDP-10 nor Lisp Machine ever had microcode that implemented a
radically different macroinstruction set than was intended.  Despite the 
fact that they included ("macro") compilers for many other languages
(FORTRAN, C, Ada, Pascal, Prolog, etc.), the Lisp Machines were only
microcoded to execute their Lisp macroinstruction set, as far as I remember.  
(Special purpose user microcoded functions aside, the closest thing is that
the 3600, I think,  had some microcoded support for Prolog unification.  
I don't remember exactly what that was all about though.)

But the D machines were also Smalltalk and Mesa machines!
From: Christopher C. Stacy
Subject: Re: Lisp in hardware
Date: 
Message-ID: <uisbtl16z.fsf@news.dtpq.com>
By the way, I left out from my description the Scheme chip;
we didn't call that a "Lisp Machine", although obviously it 
was a kind of Lisp Machine.  These were never used in actual
workstations at the lab or commercially; they were strictly
research projects in compiler and VLSI technology.
From: Robert Swindells
Subject: Re: Lisp in hardware
Date: 
Message-ID: <pan.2004.08.09.21.03.25.73463@fdy2.demon.co.uk>
On Sun, 08 Aug 2004 23:22:43 +0000, Christopher C. Stacy wrote:

> By the way, I left out from my description the Scheme chip;
> we didn't call that a "Lisp Machine", although obviously it 
> was a kind of Lisp Machine.  These were never used in actual
> workstations at the lab or commercially; they were strictly
> research projects in compiler and VLSI technology.

As a further data point, there is a critique of the Scheme chip
in David Ungar's thesis.

His conclusion was that you really do need a compiler.

Robert Swindells
From: John Thingstad
Subject: Re: Lisp in hardware
Date: 
Message-ID: <opscgio9vvpqzri1@mjolner.upc.no>
Why bother with a lisp machine now!
They had their place when workstations were underpowered for the job.
Now, even my home PC runs lisp apps fine. If I needed more power I would
run one of the parallel lisp implementations for supercomputers (like the
one created for the Connection Machine).
I fail to see the need for a lisp workstation. Movitz, a lisp OS, is an
alternative if you want an all-lisp environment.

On Sun, 08 Aug 2004 09:27:54 GMT, Christopher C. Stacy  
<······@news.dtpq.com> wrote:

>>>>>> On 8 Aug 2004 07:34:07 GMT, Alexander Burger ("Alexander") writes:
>  Alexander> Frank Buss <··@frank-buss.de> wrote:
>  >> But first I want to know how it was solved in other systems, like the
>  >> Symbolics Lisp Machine. Where can I find a hardware description of the
>
>  Alexander> I believe that these machines were not really "Lisp Machines",
>  Alexander> because they had a normal processor architecture with just a few
>  Alexander> instructions optimized for compiled Lisp code. And such code is
>  Alexander> IMHO not "Lisp", because it was transformed into another
>  Alexander> language. It breaks, for example, the fundamental principle of
>  Alexander> the equivalence of code and data.
>
> When people say the proper noun "Lisp Machine", they are generally
> referring to the MIT Lisp Machine (eg. the CADR) and the follow-on
> machines created by the same people at the commercial spin-offs,
> Symbolics and LMI.   (Xerox also had Lisp workstations that could
> rightly be called "Lisp machines", but around MIT we called those
> "D Machines", and I believe that's also what Xerox called them.)
>
> Your point, though, is that "Lisp Machine" was an inaccurate name.
> You seem to be asserting that those machines were "normal" and
> that they were not well optimized for compiling (executing?) Lisp.
>
> Symbolics Lisp Machines included three different machine architectures:
> CADR (the original MIT design), the 3600, and Ivory.  I programmed
> extensively on all of those, but did not need to know much about their
> implementation to do so; I'm more familiar with the first and last.
> I am mostly interested in Ivory, which was the zenith of the Lisp
> Machine architecture.  Ivory was also ultimately ported from Symbolics
> hardware to software, executing a very high-performance Ivory emulation
> (on the only hot 64-bit hardware then available -- the DEC Alpha)
> where it was called the VLM ("Virtual Lisp Machine" aka "Open Genera").
>
> While the CADR in particular could be thought of as a general-purpose
> machine, it was most certainly designed to be good for executing Lisp.
> The later machines reflected more design experience and had even
> more optimizations.  The architecture of all of these machines was
> unique enough that I would not call any of them "normal", compared
> to other CPUs at the time or today's architectures.
>
> Lisp is a general purpose language, in the larger picture relatively
> unconcerned with CARs and CDRs, so it is not surprising that a Lisp
> Machine is a good general purpose machine.  I would take issue with
> your suggestion that it is the number of instructions that matters,
> while also asking what you counted and what numbers you came up with.
> It seems to me that the support for data types (that's a mouthful),
> calling conventions, and GC, are more important overall.  But all of
> the machines did have instructions for list operations, for example.
>
> I am very unclear on your notion of "equivalence" of code and data;
> in particular how this was "broken" by those machines.
>
> Could you describe the alternate machine architecture that you have in
> mind, show examples of how you would compile it, and contrast that
> with how it would be more "optimized for compiling [I assume you
> really mean executing] Lisp code" than the realized Lisp Machines
> mentioned above?  If this is to be a virtual machine, a comparison
> with the VLM would be most interesting, especially an analysis of
> how the emulator will perform on the targeted real hardware.
>
> I would also be interested in an analysis of your machine architecture
> versus the Xerox D Machines (which I know very little about, but which
> I suspect had more Lisp machine instructions than the CADR).



-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
From: Carl Shapiro
Subject: Re: Lisp in hardware
Date: 
Message-ID: <ouyd620zm7o.fsf@panix3.panix.com>
"John Thingstad" <··············@chello.no> writes:

> Why bother with a lisp machine now!
> They had their place when workstations were underpowered for the job.
> Now, even my home PC runs lisp apps fine. If I needed more power I would
> run one of the parallel lisp implementations for supercomputers (like the
> one created for the Connection Machine).
> I fail to see the need for a lisp workstation. Movitz, a lisp OS, is an
> alternative if you want an all-lisp environment.

Hear, hear.  Keyboards aside, the magic of the Lisp machine resides in
its wonderful integrated software environment, not in the tricked-out
hardware.
From: Duncan Entwisle
Subject: Re: Lisp in hardware
Date: 
Message-ID: <family*.*entwisle*-6D9C43.16181809082004@newstrial.btopenworld.com>
In article <···············@panix3.panix.com>,
 Carl Shapiro <·············@panix.com> wrote:

> "John Thingstad" <··············@chello.no> writes:
> 
> > Why bother with a lisp machine now!
> 
> Hear, hear.  Keyboards aside, the magic of the Lisp machine resides in
> its wonderful integrated software environment, not in the tricked-out
> hardware.

Fun?

I don't think Frank wants to implement a replica of a Lisp Machine.

Personally, I just want to experiment and learn, and as an electronics 
engineer I find my inspiration in hardware.



In article <············@newsreader2.netcologne.de>,
 Frank Buss <··@frank-buss.de> wrote: 
> I already have a Spartan-3 starter kit, too, and writing hardware 
> descriptions in Verilog is not much more difficult than writing programs 
> in C, so it should be easy to implement it with it.

Cool. I think the great thing about using the starter kit is that it's a 
common platform, so people can share their experiments.

Personally I'm a VHDL kind of guy - but I'll forgive you that ;-)

In the longer term you may be able to fit more into your chip if you 
think in hardware rather than software terms - that said, synthesis has 
come a long way. (Plus it may just be my innate hardware prejudice 
sneaking through :-)

(I hope more people come and join in to play - it sounds like a fun 
project, and the starter kit is cheap enough).

Cheers,
Duncan.
From: jan
Subject: Re: Lisp in hardware
Date: 
Message-ID: <fz6vic1d.fsf@iprimus.com.au>
······@news.dtpq.com (Christopher C. Stacy) writes:

> Ivory was also ultimately ported from Symbolics hardware to
> software, executing a very high-performance Ivory emulation (on the
> only hot 64-bit hardware then available -- the DEC Alpha) where it
> was called the VLM ("Virtual Lisp Machine" aka "Open Genera").

Why did it have to be 64 bit hardware?

Could current CL implementations also benefit by being ported to 64
bit architectures?

-- 
jan
From: Christopher C. Stacy
Subject: Re: Lisp in hardware
Date: 
Message-ID: <u1xif7ya4.fsf@news.dtpq.com>
>>>>> On Tue, 10 Aug 2004 19:21:18 +1000, jan  ("jan") writes:

 jan> ······@news.dtpq.com (Christopher C. Stacy) writes:
 >> Ivory was also ultimately ported from Symbolics hardware to
 >> software, executing a very high-performance Ivory emulation (on the
 >> only hot 64-bit hardware then available -- the DEC Alpha) where it
 >> was called the VLM ("Virtual Lisp Machine" aka "Open Genera").

 jan> Why did it have to be 64 bit hardware?

Because memory words on the Lisp Machine were 40 bits wide.  
(The address space was 36 bits, with each memory cell also having
several more bits of data-type tag fetched in parallel.)
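A quick way to see the register-width point: assuming (for illustration only) a 4-bit tag next to a 36-bit datum, one memory word already needs 40 bits, too wide for a 32-bit register but comfortable in a 64-bit one:

```python
# Sketch only: the 4/36 split is an assumption for illustration; the post
# says 36 bits of address plus "several more bits" of data-type tag.
TAG_BITS = 4
DATA_BITS = 36

def pack(tag, data):
    """Hold one tagged Lisp-machine word in a single host integer."""
    assert 0 <= tag < (1 << TAG_BITS)
    assert 0 <= data < (1 << DATA_BITS)
    return (tag << DATA_BITS) | data

def unpack(word):
    return word >> DATA_BITS, word & ((1 << DATA_BITS) - 1)

word = pack(0b1010, (1 << 36) - 1)   # largest 36-bit datum
assert unpack(word) == (0b1010, (1 << 36) - 1)
assert 32 < word.bit_length() <= 64  # needs more than 32 bits, fits in 64
```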
From: Rainer Joswig
Subject: Re: Lisp in hardware
Date: 
Message-ID: <joswig-8ADB53.10522208082004@news-50.dca.giganews.com>
In article <··············@uni-berlin.de>,
 Alexander Burger <···@software-lab.de> wrote:

> Frank Buss <··@frank-buss.de> wrote:
> > I want to implement a processor core in a Xilinx FPGA 
> > (http://www.jroller.com/page/fb), which can execute Lisp. I think it should 
> > be possible to do something like Pico Lisp, which is not a normal compiler, 
> > but more an interpreter:
> 
> Actually, this was my initial intention when I developed Pico
> Lisp in the late 80s. To build that virtual machine in hardware.
> 
> I soon realized, however, that it wouldn't make much sense.
> A hardware implementation - or implementing the interpreter in
> microcode - would probably not be much faster than an
> interpreter written in C or assembly, because the interpreter
> will completely fit into the processor cache and leave the
> bottlenecks with the memory accesses.
> 
> The only thing left would be the beauty and simplicity of the
> architecture.
> 
> 
> > But first I want to know how it was solved in other systems, like the 
> > Symbolics Lisp Machine. Where can I find a hardware description of the 
> 
> I believe that these machines were not really "Lisp Machines",
> because they had a normal processor architecture with just a few
> instructions optimized for compiled Lisp code.

Hmm. *I* don't believe that last paragraph.

The processors from Symbolics (derived from the earlier CADR)
are described in literature. 

Why don't you read the Symbolics 3600 Technical Summary
from 1983, come back here and tell us if you think
the processor architecture that is described there
is a 'normal processor architecture with just a few
instructions optimized for compiled Lisp code'?

Here is the book:

  http://bitsavers.org/pdf/symbolics/3600technicalSummary_Feb83.pdf

Start with page 79 (page 85 in the PDF): '3600 Hardware: Processor Architecture'


a few things for your kind consideration:

- generally the implementation of data types is optimized
  for usage with Lisp (like cdr-coding for lists, ...).
  The 3600 processor supported over 30 Lisp data types
  in hardware (like symbols, strings, bignums, coroutines,
  closures, nil, ...)

- memory words are extended to 36 bits (later to 40) with
  4 bits for tags and cdr-coding.
  For some data there are additional 4 bits for tags (leaving
  28 bits for data).

- tag checking at runtime in hardware

- tag comparison works in parallel to the ALU. The ADD instruction
  does an integer add in parallel to tag checking. If the
  arguments were not integers, the result is discarded, and so on.

- the processor is a stack-oriented machine
  (a stack group is: control stack, binding stack, data stack) with stack buffers.
  Most instructions get the operands from the stack and
  return the result(s) to the stack.

- the processor has support for (ephemeral) garbage collection

- and lots more

> And such code is
> IMHO not "Lisp", because it was transformed into another
> language.

Machine code on the Symbolics Lisp Machine is not Lisp. That's right.
Though some of the instructions correspond to some
Lisp functionality.

> It breaks, for example, the fundamental principle of
> the equivalence of code and data.

If the Lisp system uses any kind of compiler, the code that
gets executed is not Lisp, but some machine or byte code.
But we learned to live with that.

> 
> - Alex
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nm9lsF2456dU1@uni-berlin.de>
Rainer Joswig <······@lisp.de> wrote:
> In article <··············@uni-berlin.de>,
>  Alexander Burger <···@software-lab.de> wrote:
> > I believe that these machines were not really "Lisp Machines",
> > because they had a normal processor architecture with just a few
> > instructions optimized for compiled Lisp code.

> Hmm. *I* don't believe that last paragraph.

Perhaps I said it a little bit too provocatively, but I think it holds
true, especially concerning the later points about machine vs. Lisp
code. It depends on what you mean by "Lisp Machines": machines
executing Lisp (which they are not), or machines with opcodes optimized
for Lisp compilers.


> The processors from Symbolics (derived from the earlier CADR)
> are described in literature. 

I'm aware of that literature and have read parts of it.


> - generally the implementation of data types is optimized
>   for usage with Lisp (like cdr-coding for lists, ...).
>   The 3600 processor supported over 30 Lisp data types
>   in hardware (like symbols, strings, bignums, coroutines,
>   closures, nil, ...)

> - memory words are extended to 36 bits (later to 40) with
>   4 bits for tags and cdr-coding.
>   For some data there are additional 4 bits for tags (leaving
>   28 bits for data).

> - tag checking at runtime in hardware

> - tag comparison works in parallel to the ALU. The ADD instruction
>   does an integer add in parallel to tag checking. If the
>   arguments were not integers, the result is discarded, and so on.

> - the processor is a stack-oriented machine
>   (a stack group is: control stack, binding stack, data stack) with stack buffers.
>   Most instructions get the operands from the stack and
>   return the result(s) to the stack.

> - the processor has support for (ephemeral) garbage collection

All these points boil down to special opcodes which have to be produced
by the compiler. You could, for example, implement them all in
microcode, or even as subroutines or exception handlers for
unimplemented instructions.

It is just the execution speed that changes. The basic architecture,
with a linear instruction pointer, branches and so on, is just the
normal thing.
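For instance, the parallel hardware tag check on ADD could indeed be rendered as an ordinary subroutine. A hypothetical sketch (tags and trap behavior invented for illustration):

```python
# Software rendition of a tag-checked ADD: the fast path assumes fixnums;
# anything else "traps" to a generic handler (here just an exception).
TAG_FIXNUM = 0
TAG_OTHER = 1

def tagged_add(a, b):
    (tag_a, val_a), (tag_b, val_b) = a, b
    if tag_a == TAG_FIXNUM and tag_b == TAG_FIXNUM:
        return (TAG_FIXNUM, val_a + val_b)   # the "ALU" result is kept
    # On the real hardware the add happened in parallel anyway and its
    # result was discarded; in software we just dispatch to a slow path.
    raise TypeError("non-fixnum operands: trap to generic add")

assert tagged_add((TAG_FIXNUM, 2), (TAG_FIXNUM, 3)) == (TAG_FIXNUM, 5)
```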

A "real" Lisp Machine would directly execute s-expressions, like what
(I think) the OP Frank Buss has in mind.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: ·······@Yahoo.Com
Subject: Re: Lisp in hardware
Date: 
Message-ID: <REM-2004aug08-006@Yahoo.Com>
> From: Alexander Burger <···@software-lab.de>
> A "real" Lisp Machine would directly execute s-expressions

That's sloppy language. A "real" Lisp Machine would directly execute
pointy structures, what is built in memory after an s-expression is fed
as input to READ. You wouldn't ever want to directly execute
s-expressions which are the print representation.
From: Rainer Joswig
Subject: Re: Lisp in hardware
Date: 
Message-ID: <joswig-24C24B.13170608082004@news-50.dca.giganews.com>
In article <··············@uni-berlin.de>,
 Alexander Burger <···@software-lab.de> wrote:

> A "real" Lisp Machine would directly execute s-expressions, like what
> (I think) the OP Frank Buss has in mind.

Then for example OpenMCL (or, say, SBCL) is not a real "Lisp system",
because it does not execute s-expressions. OpenMCL compiles everything
to machine code. The only real Lisp system
would be one that is interpreting s-expressions.

So, I think the concept of a 'real' Lisp Machine is
as interesting and practical as 'Pure Lisp'.

For me, a real Lisp Machine boils down to the following:

- the processor is optimized for Lisp (opcodes, stack architecture,
  data types, garbage collection, ...)

- the operating system for this machine is mostly written in Lisp for Lisp.
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nmi7iF1jun1U1@uni-berlin.de>
Rainer Joswig <······@lisp.de> wrote:
> In article <··············@uni-berlin.de>,
>  Alexander Burger <···@software-lab.de> wrote:

> > A "real" Lisp Machine would directly execute s-expressions, like what
> > (I think) the OP Frank Buss has in mind.

> Then for example OpenMCL (or, say, SBCL) is not a real "Lisp system",
> because it does not execute s-expressions. OpenMCL compiles everything
> to machine code. The only real Lisp system
> would be one that is interpreting s-expressions.

The term "Lisp system" is all right. For practical reasons, compiling a
Lisp source to machine code may have some speed advantages, but in
general I do object to the compilation of Lisp. I tried to explain that
in

   http://www.cs.uni-bonn.de/~costanza/lisp-ecoop/submissions/Burger.pdf

The term "Lisp Machine", however, suggests a machine _consisting_ of Lisp
expressions, not _emulating_ them in another language (machine code).


> So, I think the concept of a 'real' Lisp Machine is
> as interesting and practical as 'Pure Lisp'.

Interesting, yes, and beautiful IMHO.

But I consider Pico Lisp a real Lisp Machine, though each opcode is
implemented in C instead of microcode or hard-wired logic.

And it is practical, indeed. I use it for almost all my daily work.

- Alex
-- 

   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Rainer Joswig
Subject: Re: Lisp in hardware
Date: 
Message-ID: <joswig-9452B3.14272408082004@news-50.dca.giganews.com>
In article <··············@uni-berlin.de>,
 Alexander Burger <···@software-lab.de> wrote:

> Rainer Joswig <······@lisp.de> wrote:
> > In article <··············@uni-berlin.de>,
> >  Alexander Burger <···@software-lab.de> wrote:
> 
> > > A "real" Lisp Machine would directly execute s-expressions, like what
> > > (I think) the OP Frank Buss has in mind.
> 
> > Then for example OpenMCL (or, say, SBCL) is not a real "Lisp system",
> > because it does not execute s-expressions. OpenMCL compiles everything
> > to machine code. The only real Lisp system
> > would be one that is interpreting s-expressions.
> 
> The term "Lisp system" is all right. For practical reasons, compiling a
> Lisp source to machine code may have some speed advantages, but in
> general I do object to the compilation of Lisp. I tried to explain that
> in
> 
>    http://www.cs.uni-bonn.de/~costanza/lisp-ecoop/submissions/Burger.pdf
> 
> The term "Lisp Machine", however, suggests a machine _consisting_ of Lisp
> expressions, not _emulating_ them in another language (machine code).

Maybe to you. Not to me. For me it suggests that it is a kind
of computer that can execute Lisp efficiently and sits
on, under or near my desk. I have two of those Lisp
machines to the right of me just now. I can touch them.

> 
> > So, I think the concept of a 'real' Lisp Machine is
> > as interesting and practical as 'Pure Lisp'.
> 
> Interesting, yes, and beautiful IMHO.
> 
> But I consider Pico Lisp a real Lisp Machine, though each opcode is
> implemented in C instead of microcode or hard-wires.


I don't. For me a machine is something I can touch. A physical thing. A device.
Wikipedia, sense 1 in wordnet: "A machine is any mechanical or
electrical device that transmits or modifies energy to perform or
assist in the performance of tasks."

Pico Lisp is software.


> 
> And it is practical, indeed. I use it for almost all my daily work.
> 

Nice.

> - Alex
From: Ivan Boldyrev
Subject: Re: Lisp in hardware
Date: 
Message-ID: <h2apu1x6j9.ln2@ibhome.cgitftp.uiggm.nsc.ru>
On 8830 day of my life Rainer Joswig wrote:
>> But I consider Pico Lisp a real Lisp Machine, though each opcode is
>> implemented in C instead of microcode or hard-wires.
>
> I don't. For me a machine is something I can touch. A physical
> thing. A device.
> Wikipedia, sense 1 in wordnet: "A machine is any mechanical or
> electrical device that transmits or modifies energy to perform or
> assist in the performance of tasks."

Did you ever touch a Finite State machine?  Or a Turing machine?

Languages are full of synonyms.  In computer science (I mean
*science*, not industry) a machine is something different.

-- 
Ivan Boldyrev

                                        | recursion, n:
                                        |       See recursion
From: Gareth McCaughan
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87fz6x1ywe.fsf@g.mccaughan.ntlworld.com>
Alexander Burger wrote:

> The term "Lisp Machine", however, suggests a machine _consisting_ of Lisp
> expressions, not _emulating_ them in another language (machine code).

The term "Lisp Machine" gets its meaning mostly from the large
variety of things called by that name back in the 1980s. None
of them, so far as I know, executed S-expressions directly.

What exactly is wrong with "Lisp Machine" meaning "a machine
whose hardware and OS are adapted for running software in Lisp"?

-- 
Gareth McCaughan
.sig under construc
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nn1t3F2eacuU1@uni-berlin.de>
Gareth McCaughan <················@pobox.com> wrote:
> Alexander Burger wrote:

> > The term "Lisp Machine", however, suggests a machine _consisting_ of Lisp
> > expressions, not _emulating_ them in another language (machine code).

> The term "Lisp Machine" gets its meaning mostly from the large
> variety of things called by that name back in the 1980s. None
> of them, so far as I know, executed S-expressions directly.

> What exactly is wrong with "Lisp Machine" meaning "a machine
> whose hardware and OS are adapted for running software in Lisp"?

Hmm, this could be anything ...

Ok, I understand that the term "Lisp Machine" was widely accepted. I
just always had the feeling that it was not appropriate.

> -- 
> Gareth McCaughan
> .sig under construc

The original question of this thread was whether it makes sense to
implement an s-expression-executor like Pico Lisp in hardware. In a
previous post to this thread I wrote that it wouldn't make much sense,
but now I rather begin to like the idea. Frank, are you going to try it?

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <86vfftmb2m.fsf@goldenaxe.localnet>
Alexander Burger <···@software-lab.de> writes:

> The term "Lisp system" is all right. For practical reasons, compiling a
> Lisp source to machine code may have some speed advantages, but in
> general I do object to the compilation of Lisp. I tried to explain that
> in
>
>    http://www.cs.uni-bonn.de/~costanza/lisp-ecoop/submissions/Burger.pdf

On the second page you claim that "... macros were introduced to
satisfy the needs of compilers." I think you have to explain that
one.

Also you claim that walking S-expressions is not "interpreting". That
is rather odd. To stick with wikipedia: "An interpreter is a computer
program that executes other programs." It does not matter in what form
that other program is stored. 

It is also doubtful whether a typical program spends most of its time
in built-in functions. Allocating storage might be an example, but
with proper type inference and "compiler magic" allocated storage
can decrease quite significantly, e.g. when dealing with numbers.

Some paragraphs below you compare Pico Lisp's performance to that of
CLISP. Just for the record: CLISP compiles s-exps to byte-code and
interprets that. So by your own argument (where you talk about Java
byte-code) it must be slower than Pico Lisp. Try to compare to CMUCL,
SBCL, MCL ...

By the way: With the list being the only data type for sequences, how
do you implement algorithms that depend on O(1) array access? Several
graph algorithms come to mind...

And what about lexical closures...

I agree that PicoLisp is a very practical approach for some problems,
but it certainly is not for most of them.

Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful
From: Pascal Bourguignon
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87pt61ku6a.fsf@thalassa.informatimago.com>
Julian Stecklina <··········@web.de> writes:
> By the way: With the list being the only data type for sequences, how
> do you implement algorithms that depend on O(1) array access? Several
> graph algorithms come to mind...

That's easy:

    (defun aref (array index)
       (do ((result)
            (i 0 (1+ i))
            (items array (cdr items)))
           ((= i most-positive-fixnum) result)
        (when (= i index) (setf result (car items)))))

Et hop! O(1).

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

Our enemies are innovative and resourceful, and so are we. They never
stop thinking about new ways to harm our country and our people, and
neither do we.
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <86u0vc611q.fsf@goldenaxe.localnet>
Pascal Bourguignon <····@mouse-potato.com> writes:

> Julian Stecklina <··········@web.de> writes:
>> By the way: With the list being the only data type for sequences, how
>> do you implement algorithms that depend on O(1) array access? Several
>> graph algorithms come to mind...
>
> That's easy:
>
>     (defun aref (array index)
>        (do ((result)
>             (i 0 (1+ i))
>             (items array (cdr items)))
>            ((= i most-positive-fixnum) result)
>         (when (= i index) (setf result (car items)))))
>
> Et hop! O(1).

Looks more like O(n) to me.

Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful
From: Pascal Bourguignon
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87fz6wlgv6.fsf@thalassa.informatimago.com>
Julian Stecklina <··········@web.de> writes:

> Pascal Bourguignon <····@mouse-potato.com> writes:
> 
> > Julian Stecklina <··········@web.de> writes:
> >> By the way: With the list being the only data type for sequences, how
> >> do you implement algorithms that depend on O(1) array access? Several
> >> graph algorithms come to mind...
> >
> > That's easy:
> >
> >     (defun aref (array index)
> >        (do ((result)
> >             (i 0 (1+ i))
> >             (items array (cdr items)))
> >            ((= i most-positive-fixnum) result)
> >         (when (= i index) (setf result (car items)))))
> >
> > Et hop! O(1).
> 
> Looks more like O(n) to me.

Look again!  

For all index in [0 .. (1- (length array))] the exact same number of
operations will be executed. Namely: M increments, M cdr, 2*M =, 1
setf, 1 car.

For all index <0 or >=(length array), the number of operations
executed is only a tiny bit smaller: M increments, M cdr, 2*M =.

With M the constant: most-positive-fixnum.

O(constant) = O(1).

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

Our enemies are innovative and resourceful, and so are we. They never
stop thinking about new ways to harm our country and our people, and
neither do we.
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <86ekmg5wha.fsf@goldenaxe.localnet>
Pascal Bourguignon <····@mouse-potato.com> writes:

> With M the constant: most-positive-fixnum.
>
> O(constant) = O(1).

Creative. But useless, nevertheless. :P

Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nos71F2t29nU1@uni-berlin.de>
Julian Stecklina <··········@web.de> wrote:
> On the second page you claim that "... macros were introduced to
> satisfy the needs of compilers." I think you have to explain that
> one.

Perhaps I should state it the other way round: macros do not make much
sense in an interpreter, because they are rather inefficient there due to
double evaluation. It is better, in that case, to implement the desired
functionality as a function instead of a macro.

Having said that, I _do_ occasionally use macros in an interpreter as
well, because they make the code more readable.

But Lisp compilers depend on them. I don't know the current state of the
art, but 20 years ago it used to be so: the compiler did not support all
necessary primitives. It implemented a few of them, and all others were
defined as macros (for example, all conditional branching forms like
'if', 'when' etc. were converted to 'cond').


> Also you claim that walking S-expressions is not "interpreting". That
> is rather odd. To stick with wikipedia: "An interpreter is a computer
> program that executes other programs." It does not matter in what form
> that other program is stored. 

OK, the readers of this group probably know what is meant, but I have
often met people who were suspicious of an interpreter because they
believed it operates on a textual basis.

Besides this, and more importantly, the term "interpreting" connotes
"explaining" and "searching for a meaning", which is typical of textual
interpreters that have to look up tokens in a symbol table (like the
Lisp reader). Walking s-expressions involves no searching.


> It is also doubtful whether a typical program spends most of its time
> in built-in functions. Allocating storage might be an example, but
> with proper type inferencing and "compiler magic" allocated storage
> can decrease quite significantly. e.g. when dealing with numbers.

In Lisp, many built-in functions like list manipulation or bignum
arithmetic do very heavy work, as opposed to a bytecode instruction,
which typically only moves a few bits around. Take an extreme but
typical example (in Pico Lisp syntax):

: (mapcar + (1 2 3 4) (5 6 7 8 9))
-> (6 8 10 12)

After 'mapcar' has evaluated its arguments, the whole process runs in
full 'C' speed, no more glue involved.


> Some paragraphs below you compare Pico Lisps performance to that of
> CLISP. Just for the record: CLISP compiles s-exps to byte-code and
> interprets that. So by your own argument (where you talk about Java
> byte-code) it must be slower than Pico Lisp. Try to compare to CMUCL,
> SBCL, MCL ...

When I wrote that, CLISP was the only Lisp I had access to, and I did
not know (or care) what it compiled to.

But this case even supports my opinion that people blindly believe in
(or believe to need) compilers, without checking the results.

I expected, of course, a compiled Lisp to be significantly faster than
Pico Lisp. And this will surely be the case with many other Lisps. But
in my experience, the raw speed gain is not worth all the disadvantages.
That's what I tried to explain in that paper.

20 years ago I was also a speed fanatic. I coded everything possible in
assembly language. I hope I'm wiser by now ;-)


> By the way: With the list being the only data type for sequences, how
> do you implement algorithms that depend on O(1) array access? Several
> graph algorithms come to mind...

This argument appears very often in such discussions. Do we really need
arrays, vectors and strings, as is often proudly stated about the
benefits of "modern" Lisp?

First of all, it has great advantages to use lists for sequential data,
because of the wealth of list processing functions the language
provides.

Sure, it is inefficient if you want to access the 1000th element of a
list. But how often does that happen? I cannot recall a single case. In
Lisp, you don't iterate over a sequence with an index as you would in,
e.g., C or Java

   for (i = 0; i < arr.length; ++i)
      ... arr[i] ...;

but you typically use one of the mapping functions. Or you search the
list sequentially, or perform some other operation while traversing it.

Such a long list more likely smells like a design bug. To handle large
amounts of uniform data, I rather use more structured data like nested
(and thus shorter) lists or trees (also implemented in lists, of
course).

For small amounts of data, the overhead of, say, 'cdddr' or 'nth'
is usually acceptable.

Again, this is a matter of "execution speed" versus "power to the
programmer" and "ease of handling". I gladly abstain from the first if I
can gain the other two.


> And what about lexical closures...

Admittedly, that's a delicate point. But there are conventions in Pico
Lisp. Please read the documentation about transient symbols, and the FAQ
about dynamic binding.


> I agree that PicoLisp is a very practical approach for some problems,
> but it certainly is not for most of them.

Perhaps you can give some examples?
Anyway, nobody is forced to use it. There are plenty of useful
programming languages, and everybody should use whatever they find most
suitable.

> Regards,
> -- 
>                     ____________________________
>  Julian Stecklina  /  _________________________/

Ciao,
- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <86llgo5zoq.fsf@goldenaxe.localnet>
Alexander Burger <···@software-lab.de> writes:

> Perhaps I have to state it the other way round: Macros do not make much
> sense in an interpreter, because they are rather inefficient here due to
> double evaluation. It is better then to implement the desired
> functionality as a function instead of a macro.

It is true that you can write some of the more common stuff one does
with macros via functions. But what about:

(defh sum-tree
    (?a ?b) -> (+ (sum-tree a)
		  (sum-tree b))
    ?a      -> a)

This function sums up all the leaves in a tree consisting of two-element
lists. And it is declared using a macro that introduces pattern
matching. This is purely user convenience; it has nothing to do with the
compiler.
(I wrote this macro to play around with predicate logic formulas, but
it is nice for a number of list related things.)

>> I agree that PicoLisp is a very practical approach for some problems,
>> but it certainly is not for most of them.
>
> Perhaps you can give some examples?

No convincing examples. :)

> Anyway, nobody is forced to use it. There are plenty of useful
> programming languages and everybody should use what he thinks most
> suited.

Ack.

Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful
From: ·········@random-state.net
Subject: Re: Lisp in hardware
Date: 
Message-ID: <cf7tmu$59gmv$1@midnight.cs.hut.fi>
Alexander Burger <···@software-lab.de> wrote:

> Perhaps I have to state it the other way round: Macros do not make much
> sense in an interpreter, because they are rather inefficient here due to
> double evaluation. It is better then to implement the desired
> functionality as a function instead of a macro.

In a basic _evaluator_, you mean. An interpreter can "trivially" expand
macros only once, should it want to. If you haven't read SICP, maybe you
should...

> Besides this, and more important, is that the term "interpreting"
> connotates "explaining" and "searching for a meaning", which is typical
> for textual interpreters who have to lookup tokens in a symbol table
> (like the Lisp reader). Walking s-expressions involves no searching.

Interpreters walk internal data structures, "interpreting" them.
Evaluators are interpreters for parse trees (the list structure, in the
case of Lisp). SICP-style analyzing interpreters interpret chains of
closures by funcalling them. VMs are interpreters for bytecode. Your CPU
is an interpreter for machine code. The internal data structures
themselves may bear more or less resemblance to the original code --
often less rather than more.

"Lisp Machine" has a meaning: such machines existed, and the term
referred to them. It may apply to other computers in the future -- and my
belief is that by then the operative definition will have more to do with
having Lisp from head to toe, and less with hardware support for the
language. We shall see; but notice how I indicated that part of the above
statement has no factual basis beyond my imagination.

"Interpreter" is a word with an accepted meaning. Deciding that only a
single class of interpreters are really interpreters is either stupid or
arrogant. 

> But this case even supports my opinion that people blindly believe in
> (or believe to need) compilers, without checking the results.

Funny you should say so. "Why do you see the speck that is in your
brother's eye, but do not notice the log that is in your own eye? Or how
can you say to your brother, `Let me take the speck out of your eye,' when
there is the log in your own eye? You hypocrite, first take the log out of
your own eye, and then you will see clearly to take the speck out of your
brother's eye."

> This argument appears very often in such discussions. Do we really need
> arrays, vectors and strings, as is often proudly stated about the
> benefits of "modern" Lisp?

Yes.

> Again, this is again a matter of "execution speed" versus "power to the
> programmer" and "ease of handling". I gladly abstain from the first if I
> can gain the other two.

This sounds very much like you've been deeply influenced by Paul Graham
and Arc. He may be a kook, but he's a smart kook: as he has said, there
will always be applications for which there is never enough computing
power available. Those applications also tend to be quite hard, and
therefore very good candidates for lisp -- but with your approach the
computation that would have taken "just" a month will take years, or
several days instead of an hour, etc.

PG's approach to this same issue is tempered by his understanding of
compilation: he speaks of compiler annotations allowing the "list" to be
laid out and accessed as an array, etc. It may be a pie in the sky, but at
least it's a pie.

Cheers,

 -- Nikodemus                   "Not as clumsy or random as a C++ or Java. 
                             An elegant weapon for a more civilized time."
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2npdp9F37ch2U1@uni-berlin.de>
·········@random-state.net wrote:
> Alexander Burger <···@software-lab.de> wrote:

> "Interpreter" is a word with an accepted meaning. Deciding that only a
> single class of interpreters are really interpreters is either stupid or
> arrogant. 

You are citing out of context. Nobody said that.


> > This argument appears very often in such discussions. Do we really need
> > arrays, vectors and strings, as is often proudly stated about the
> > benefits of "modern" Lisp?

> Yes.

Why?


> > Again, this is again a matter of "execution speed" versus "power to the
> > programmer" and "ease of handling". I gladly abstain from the first if I
> > can gain the other two.

> This sounds very much like you've been deeply influenced by Paul Graham
> and Arc.

Not at all. Heard about him just recently.

> He may be kook, but he's a smart kook: like he has said, there
> will always be applications where there is never enough computing power
> available. Those applications also tend to be quite hard, and therefore
> very good candidates for lisp -- but with your approach the computation
> that would have taken "just" a month will take a years, or several days
> instead of an hour, etc. 

Hm, you didn't get my point. I don't write slow applications. There are
intelligent ways to optimize. It's just stupid to optimize too early and
end up using a bloated system.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Thomas Schilling
Subject: Re: Lisp in hardware
Date: 
Message-ID: <opscgyu1mctrs3c0@news.CIS.DFN.DE>
Alexander Burger wrote:

>>> This argument appears very often in such discussions. Do we really
>>> need arrays, vectors and strings, as is often proudly stated about
>>> the benefits of "modern" Lisp?
>>
>> Yes.
>
> Why?

Convenience. And efficiency.

Indeed, I don't care how stuff is implemented, as long as the concept
(i.e. the API) is supported and as long as I can get the speed (and/or
space efficiency) I need.
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2npidtF3bar0U1@uni-berlin.de>
Thomas Schilling <······@yahoo.de> wrote:
> Alexander Burger wrote:

> >>> This argument appears very often in such discussions. Do we really
> >>> need arrays, vectors and strings, as is often proudly stated about
> >>> the benefits of "modern" Lisp?
> >>
> >> Yes.
> >
> > Why?

> Convenience. And efficiency.

Great! We are back at the very beginning of that whole discussion.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Will Hartung
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nsj9gF4agaaU1@uni-berlin.de>
"Alexander Burger" <···@software-lab.de> wrote in message
···················@uni-berlin.de...
> Thomas Schilling <······@yahoo.de> wrote:
> > Alexander Burger wrote:
>
> > >>> This argument appears very often in such discussions. Do we
> > >>> really need arrays, vectors and strings, as is often proudly
> > >>> stated about the benefits of "modern" Lisp?
> > >>
> > >> Yes.
> > >
> > > Why?
>
> > Convenience. And efficiency.
>
> Great! We are back at the very beginning of that whole discussion.

He does have an interesting point.

Without a doubt, an array is a very important data structure in any system.

However, for many applications (perhaps not yours), the O(1) random access
behavior of the array is not particularly important. We have a gazillion
lines of code in our monster DB-backed web application here in Java, and I
would be hard pressed to find any point where we randomly address the
contents of an array rather than simply iterating through it. Potentially the
most appropriate instance would be Strings, but we mostly append them (and
since Java Strings are immutable, the system simply iterates over each piece
and creates a new one).

Actually, the one place we probably use this facility the most (though
shrouded by an interface) would be in hash tables, which are no doubt
layered on top of a random-access vector. But even then, most hash tables
only surpass simple iterative searches after some threshold of items, and
many of our hash tables may well be "costing" us performance because they
don't cross that threshold.

Now, certainly, a vector in memory is "cheaper" to iterate over than
linked cells, and also more space efficient; however, when the whole
system is "fast enough", the overhead of making those decisions at the
coding level may not warrant the potential performance increases.

Regards,

Will Hartung
(·····@msoft.com)
From: Frank Buss
Subject: Re: Lisp in hardware
Date: 
Message-ID: <cfbbdk$h13$1@newsreader2.netcologne.de>
"Will Hartung" <·····@msoft.com> wrote:

> Now, certainly, a vector in memory is "cheaper" to even iterate over
> than linked cells, and also more space efficient, however when the
> whole system is "fast enough", the overhead of making those decisions
> at a coding level may not warrant the potential performance increases.

But if all your types are an 8-byte cell with tag bits, as in Pico Lisp,
how do you store plain bytes without extra bits? This would be needed,
for example, for a framebuffer for graphics output, or for a wave file
for sound output. Of course, if everything is written in Lisp -- even the
loop which reads the memory and moves the bytes to a video
digital-to-analog converter register -- or if the hardware video logic
can read a linked list, then it could be stored in a list, but it would
be a waste of memory.

A clean architecture is nice, but not if I need 8 bytes to store 1 byte.
At least for a Lisp CPU, a byte array is necessary.

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Will Hartung
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nt2d4F4drr4U1@uni-berlin.de>
"Frank Buss" <··@frank-buss.de> wrote in message
·················@newsreader2.netcologne.de...
> "Will Hartung" <·····@msoft.com> wrote:
>
> > Now, certainly, a vector in memory is "cheaper" to even iterate over
> > than linked cells, and also more space efficient, however when the
> > whole system is "fast enough", the overhead of making those decisions
> > at a coding level may not warrant the potential performance increases.
>
> but if all your types are an 8 byte cell with tag bits, like in Pico Lisp,
> how do you store plain bytes without extra bits? This could be used for
> example for a framebuffer for graphics output or for a wave file for sound
> output. Of course, if anything is written in Lisp, even the loop which
> reads the memory and moves the bytes to a video digital to analog
> converter register, or the hardware video loop can read a linked list,
> then it could be stored in a list, but it would be a waste of memory.

No, it was more an observation on how LITTLE an O(1) random-access data
structure is actually necessary in the generic DP kind of work that *I*
do. A simple example: the scripting language we have for our system
doesn't even have the concept of an array, yet we've managed to write
50,000 lines of stuff in it to get our work done. It's mostly simple
expressions and glue tying high-level calls together.

I'm sure if Pico Lisp found itself popular in an environment where direct
access to a framebuffer were important, it would be extended (and, to be
fair, made more complicated) to handle those structures efficiently. As is,
it is "efficient enough" for his purposes. He also cheats by being able to
escape to C whenever he wants to (so all of a sudden, that framebuffer is
now an opaque pointer type processed "outside" of PL, for example).

Obviously this is not practical for your CPU, and you need something a bit
more flexible and efficient.

Anyway, his abandonment of arrays just made me pause and think about them
in my own work, and while I wouldn't want to give them up, it's
interesting in hindsight that I simply don't use them.

Regards,

Will Hartung
(·····@msoft.com)
From: John Thingstad
Subject: Re: Lisp in hardware
Date: 
Message-ID: <opscgvpqsepqzri1@mjolner.upc.no>
Sigh! It never ends...

On 9 Aug 2004 08:50:43 GMT, Alexander Burger <···@software-lab.de> wrote:

> [Alexander Burger's message, quoted in full with no further comment; snipped]



-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
From: Duane Rettig
Subject: Re: Lisp in hardware
Date: 
Message-ID: <47js6knl6.fsf@franz.com>
[ I've had fun reading this thread.  Though I have not much time to
answer, and I'm close to disappearing for a week for vacation,  I
still want to answer, from a CL implementor's point of view]

Alexander Burger <···@software-lab.de> writes:

> Rainer Joswig <······@lisp.de> wrote:
> > In article <··············@uni-berlin.de>,
> >  Alexander Burger <···@software-lab.de> wrote:
> 
> > > A "real" Lisp Machine would directly execute s-expressions, like what
> > > (I think) the OP Frank Buss has in mind.
> 
> > Then for example OpenMCL (or, say, SBCL) is not a real "Lisp system",
> > because it does not execute s-expressions. OpenMCL compiles everything
> > to machine code. The only real Lisp system
> > would be one that is interpreting s-expressions.
> 
> The term "Lisp system" is all right. From practical reasons, compiling a
> Lisp source to machine code may have some speed advantages, but in
> general I do object to the compilation of Lisp. I tried to explain that
> in
> 
>    http://www.cs.uni-bonn.de/~costanza/lisp-ecoop/submissions/Burger.pdf

Yes, you make it quite clear there that your largest objection is
to the compilation process.

> The term "Lisp Machine" which suggests a machine _consisting_ of Lisp
> expressions, not _emuating_ them in another language (machine code).

However, you imply in this thread (and state below) that Pico Lisp
truly is a Lisp Machine, but that contradicts the portion of your
paper where you explain (and I agree with) the difference between
"interpretation" and "evaluation" (2.1, paragraph 6).  A true Lisp
Machine would indeed interpret, and would not need to translate
between character glyphs and internal machine representations.
Ironically, your explanation for Lisps (including, I presume, Pico
Lisp) evaluating rather than interpreting is made based on an
argument of performance.  Perhaps you are not quite as unconcerned
about performance as you might think you are.

> > So, I think the concept of a 'real' Lisp Machine is
> > as interesting and practical as 'Pure Lisp'.
> 
> Interesting, yes, and beautiful IMHO.

I think you missed the humor in Rainer's irony; you should do some
Googling for "Pure Lisp" in this group to find discussions on how
to define what Pure Lisp really is, and how hard it has become to
define just what a Lisp is at all....

> But I consider Pico Lisp a real Lisp Machine, though each opcode is
> implemented in C instead of microcode or hard-wires.

Another irony - this one is sad.  You have to resort to a different
language in order to implement your own?  Why not write Pico Lisp _in_
Pico Lisp?  Oh, that's right; you can't, because you would need a
compiler...

> And it is practical, indeed. I use it for almost all my daily work.

I have no doubts that you find your Lisp useful.  I might find it
useful myself, especially in places where we tend to use scripting
languages to do some of the work we do (though more and more we
use our own product, which serves perfectly well as a scripting
language, and which _does_ have an interpreter).

However, you've likely not made many converts here, and that's not
due to what you have to offer, but rather to how you are presenting
it.  You present it, even starting with your paper, in a
"Pico Lisp vs. Common Lisp, only one should survive" kind of
style.  For example, read your own words at the beginning of
section 2 of your paper:

  "The (Common-) Lisp community will probably not be enthusiastic
   about Pico Lisp, because it disposes of several traditional Lisp
   beliefs and dogmas."

You then go on to state how wrong we are.

I think if you were instead to present Pico Lisp as a Lisp which
was developed with a different (not necessarily better) set of
criteria, then you would find that far fewer of the people
you've just challenged would have their fists in the air ready for
a fight.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nudqkF4o4d2U1@uni-berlin.de>
Duane Rettig <·····@franz.com> wrote:

> However, you imply in this thread (and state below) that Pico Lisp
> truly is a Lisp Machine, but that contradicts the portion of your
> paper where you explain (and I agree with) the difference between
> "interpretation" and "evaluation" (2.1, paragraph 6).  A true Lisp
> Machine would indeed interpret, and would not need to translate
> between character glyphs and internal machine representations.
> Ironically, your explanation for Lisps (including, I presume, Pico
> Lisp) evaluating rather than interpreting is made based on an
> argument of performance.  Perhaps you are not quite as unconcerned
> about performance as you might think you are.

I think most problems with the different opinions in our discussion
arise from a single fact, which I think I could not make clear.

I have the feeling that most Lisp users see Lisp as a monolithic system,
and they want to use it for any type of (application) programming.
That's why, IMHO, the language got bloated and complicated.

I tend toward a multi-level approach. Each level has a different
philosophy, and thus is optimized according to different objectives.

At the bottom there is the single cell. Cells are used to build numbers,
symbols and lists. This level was designed with simplicity in mind, and
I accept speed disadvantages resulting from that at higher levels.

This is the fixed base, a bit like axioms. At the next level, though, I
do try to get the optimum in execution speed, why not? But only as long
as the fixed base model is retained. Other assumptions also apply here,
for example writing it all in 'C' (and no longer in assembly as it used
to be in earlier versions). That's a decision against speed, and for
ease and portability.


> > > So, I think the concept of a 'real' Lisp Machine is
> > > as interesting and practical as 'Pure Lisp'.
> > 
> > Interesting, yes, and beautiful IMHO.

> I think you missed the humor in Rainer's irony; you should do some
> Googling for "Pure Lisp" in this group to find discussions on how
> to define what Pure Lisp really is, and how hard it has become to
> define just what a Lisp is at all....

Perhaps also a problem of confusing levels?


> > But I consider Pico Lisp a real Lisp Machine, though each opcode is
> > implemented in C instead of microcode or hard-wires.

> Another irony - this one is sad.  You have to resort to a different
> language in order to implement your own?  Why not write Pico Lisp _in_
> Pico Lisp?  Oh, that's right; you can't, because you would need a
> compiler...

Hm, I'm _using_ a compiler :-) It just happens to be 'C'. Again,
building the system is another level. Don't mix it up.

When you look at the virtual machine model on the boxes-and-pointers
level, with 'opcodes' as pointers to machine code, it is a perfect view
of a Lisp Machine, I think.


> However, you've likely not made many converts here, but it's not
> due to what you have to offer, but rather how you are presenting
> it.  You present it, even starting with your paper, as if it were
> a "Pico Lisp vs Common Lisp, only one should survive" kind of
> style.  For example, read your own words at the beginning of
> section 2 of your paper:  

>   "The (Common-) Lisp community will probably not be enthusiastic
>    about Pico Lisp, because it disposes of several traditional Lisp
>    beliefs and dogmas."

> You then go on to state how wrong we are.

Sorry. I probably went into a defensive position too early. But I've
been attacked many times during the last 20 years for doing things a bit
differently from the mainstream (and also in this thread by some
people). I surely never intended to cast any doubt on the usefulness of
Common or other Lisps, but was only concerned about the acceptance of a
contrary belief. In fact, I would rather recommend Common Lisp than Pico
Lisp to anybody who just wants to solve his programming problem and get
his application running (documentation, availability etc.). Pico Lisp
covers the range from systems- to application programming (the latter
case being rather specialized) and is more suited for people who like to
take a vertical approach.


> I think if you were to instead present Pico Lisp as a Lisp which
> was developed with a different (not necessarily better) set of
> criteria, then you would find that far fewer of the people
> you've just challenged will have their fists in the air ready for
> a fight.

Ok. Here I officially take back the "not enthusiastic" statement in that
paper. I agree that it was a bit too provocative. I have to admit that
sometimes it is fun, though, to be provocative. I promise to try to
avoid that in the future :-)


> Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
> 555 12th St., Suite 1450               http://www.555citycenter.com/
> Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Duane Rettig
Subject: Re: Lisp in hardware
Date: 
Message-ID: <4hdr9vfn6.fsf@franz.com>
Alexander Burger <···@software-lab.de> writes:

> Duane Rettig <·····@franz.com> wrote:
> 
> > However, you imply in this thread (and state below) that Pico Lisp
> > truly is a Lisp Machine, but that contradicts the portion of your
> > paper where you explain (and I agree with) the difference between
> > "interpretation" and "evaluation" (2.1, paragraph 6).  A true Lisp
> > Machine would indeed interpret, and would not need to translate
> > between character glyphs and internal machine representations.
> > Ironically, your explanation for Lisps (including, I presume, Pico
> > Lisp) evaluating rather than interpreting is made based on an
> > argument of performance.  Perhaps you are not quite as unconcerned
> > about performance as you might think you are.
> 
> I think most problems with the different opinions in our discussion
> arise from a single fact, which I think I could not make clear.

You still have not made it clear.  If the next sentences state that
fact, you are not making explicit some assumptions that may be obvious
to you, but which are not obvious to us.

> I have the feeling that most Lisp users see Lisp as a monolithic system,
> and they want to use it for any type of (application) programming.
> That's why IMHO the language got bloated and complicated.

What's wrong with a monolithic system?

What's wrong with complicated?  If your language is complicated, it
can handle more complicated problems simply.  If it is simple, you
must then build more structure and complication into your application
in order to handle the same level of complexity in your problem set.

Why do you view CL as bloated?  Have you done any comparisons between
CL and any of several Microsoft products, or Java runtimes or Web
browsers lately?

> I tend to a multi-level approach. And each level has a different
> philosophy, and thus is optimized according to different objectives.
> 
> At the bottom there is the single cell. Cells are used to build numbers,
> symbols and lists. This level was designed with simplicity in mind, and
> I accept speed disadvantages resulting from that at higher levels.
> 
> This is the fixed base, a bit like axioms. At the next level, though, I
> do try to get the optimum in execution speed, why not? But only as long
> as the fixed base model is retained. Other assumptions also apply here,
> for example writing it all in 'C' (and no longer in assembly as it used
> to be in earlier versions). That's a decision against speed, and for
> ease and portability.

But if your lisp were able to compile down to the machine-level, then
it could be as efficient as you want it to be, because you control the
compiler, and you would also not have to sacrifice portability.

> > > > So, I think the concept of a 'real' Lisp Machine is
> > > > as interesting and practical as 'Pure Lisp'.
> > > 
> > > Interesting, yes, and beautiful IMHO.
> 
> > I think you missed the humor in Rainer's irony; you should do some
> > Googling for "Pure Lisp" in this group to find discussions on how
> > to define what Pure Lisp really is, and how hard it has become to
> > define just what a Lisp is at all....
> 
> Perhaps also a problem of confusing levels?

No.  You haven't yet done this googling for that discussion, have you?

> > > But I consider Pico Lisp a real Lisp Machine, though each opcode is
> > > implemented in C instead of microcode or hard-wires.
> 
> > Another irony - this one is sad.  You have to resort to a different
> > language in order to implement your own?  Why not write Pico Lisp _in_
> > Pico Lisp?  Oh, that's right; you can't, because you would need a
> > compiler...
> 
> Hm, I'm _using_ a compiler :-) It just happens to be 'C'. Again,
> building the system is another level. Don't mix it up.

Why not?  Why should I accept your view of "levels"?  I maintain a very
successful Common Lisp implementation, and it "mixes up" these levels
to great advantage, thankyouverymuch.

> When you look at the virtual machine model on the boxes-and-pointers
> level, with 'opcodes' as pointers to machine code, it is a perfect view
> of a Lisp Machine, I think.

You may think, but you haven't convinced me.  And for all your
railing against compilers, especially with the broad definition
of compiler you are showing elsewhere in this and another thread,
you will be forced to the conclusion that Pico Lisp _is_ a compiler.
It _must_ be, if it uses a REPL, since READ is itself a compiler,
in your broad sense.

> > However, you've likely not made many converts here, but it's not
> > due to what you have to offer, but rather how you are presenting
> > it.  You present it, even starting with your paper, as if it were
> > a "Pico Lisp vs Common Lisp, only one should survive" kind of
> > style.  For example, read your own words at the beginning of
> > section 2 of your paper:  
> 
> >   "The (Common-) Lisp community will probably not be enthusiastic
> >    about Pico Lisp, because it disposes of several traditional Lisp
> >    beliefs and dogmas."
> 
> > You then go on to state how wrong we are.
> 
> Sorry. I probably went into a defensive position too early. But I've
> been attacked many times during the last 20 years for doing things a bit
> differently from the mainstream (and also in this thread by some
> people). I surely never intended to cast any doubt on the usefulness of
> Common or other Lisps, but was only concerned about the acceptance of a
> contrary belief. In fact, I would rather recommend Common Lisp than Pico
> Lisp to anybody who just wants to solve his programming problem and get
> his application running (documentation, availability etc.). Pico Lisp
> covers the range from systems- to application programming (the latter
> case being rather specialized) and is more suited for people who like to
> take a vertical approach.

That's fine.  We've always been very broadminded about suitability of
different systems to different tasks in this newsgroup.

> > I think if you were to instead present Pico Lisp as a Lisp which
> > was developed with a different (not necessarily better) set of
> > criteria, then you would find that far fewer of the people
> > you've just challenged will have their fists in the air ready for
> > a fight.
> 
> Ok. Here I officially take back the "not enthusiastic" statement in that
> paper. I agree that it was a bit too provocative. I have to admit that
> sometimes it is fun, though, to be provocative. I promise to try to
> avoid that in the future :-)

Thank you.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2nv8e7F4uvfnU1@uni-berlin.de>
Duane Rettig <·····@franz.com> wrote:
> Alexander Burger <···@software-lab.de> writes:
> > I think most problems with the different opinions in our discussion
> > arise from a single fact, which I think I could not make clear.

> You still have not made it clear.  If the next sentences state that
> fact, you are not making explicit some assumptions that may be obvious
> to you, but which are not obvious to us.

> What's wrong with a monolithic system?

Inflexible. Difficult to change or adapt. No fun to use. Feeling
powerless when problems arise; you're at the mercy of that system and
its producer (perhaps that's what you want, as you are one).


> What's wrong with complicated?  If your language is complicated, it
> can handle more complicated problems simply.  If it is simple, you

I strongly disagree.

> must then build more structure and complication into your application
> in order to handle the same level of complexity in your problem set.

That is true, but I introduce complexity at the higher levels. This
gives a much more flexible system.



> Why do you view CL as bloated?  Have you done any comparisons between
> CL and any of several Microsoft products, or Java runtimes or Web
> browsers lately?

I agree that there are much worse systems. For that reason I don't use
Microsoft products, and use Java only at a bare minimum.

So you say "I'd like to be bad, because others are worse"?



> But if your lisp were able to compile down to the machine-level, then
> it could be as efficient as you want it to be, because you control the
> compiler

Yes, this might be a way, and in fact I did that for other systems. I
tried many ways. But finally I settled for the current one because it
seemed the best to me. I trade some speed for simplicity and
flexibility. And I thought I explained enough why I decided against a
compiler.

> and you would also not have to sacrifice portability.

'C' is relatively portable. Non-portability issues arise at a higher
level (sic), that of operating system calls, for example.


> > > I think you missed the humor in Rainer's irony; you should do some
> > > Googling for "Pure Lisp" in this group to find discussions on how
> > > to define what Pure Lisp really is, and how hard it has become to
> > > define just what a Lisp is at all....
> > 
> > Perhaps also a problem of confusing levels?

> No.  You haven't yet done this googling for that discussion, have you?

No, because I know how such discussions proceed on Usenet. Similar
to the current one. People talk at cross-purposes.

I had a very stressful day. Problems with two different customers, while
having to provide another solution by tomorrow (and posting to this
thread, too). As soon as I have time, I'll take a look.


> > Hm, I'm _using_ a compiler :-) It just happens to be 'C'. Again,
> > building the system is another level. Don't mix it up.

> Why not?  Why should I accept your view of "levels"?  I maintain a very

You are really a system designer, and don't think on a multitude of
levels? You have a flat view of the world? Black and white? Of course,
because there are also only zeroes and ones in a computer, aren't there?

> successful Common Lisp implementation, and it "mixes up" these levels
> to great advantage, thankyouverymuch.

I'd better not buy Franz Lisp tomorrow. I was just considering doing
that :-)

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Duane Rettig
Subject: Re: Lisp in hardware
Date: 
Message-ID: <44qn9v1rs.fsf@franz.com>
Alexander Burger <···@software-lab.de> writes:

> Duane Rettig <·····@franz.com> wrote:
> > Alexander Burger <···@software-lab.de> writes:
> > > I think most problems with the different opinions in our discussion
> > > arise from a single fact, which I think I could not make clear.
> 
> > You still have not made it clear.  If the next sentences state that
> > fact, you are not making explicit some assumptions that may be obvious
> > to you, but which are not obvious to us.
> 
> > What's wrong with a monolithic system?
> 
> Inflexible. Difficult to change or adapt. No fun to use.

Consistent.  Easy to change or adapt.  Lots of fun to use.

> Feeling
> powerless when problems arise; you're at the mercy of that system and
> its producer (perhaps that's what you want, as you are one).

Since you don't use Common Lisp, how can you say this?  Are you just
guessing?

> > What's wrong with complicated?  If your language is complicated, it
> > can handle more complicated problems simply.  If it is simple, you
> 
> I strongly disagree.

I understand this disagreement.  Actually, I should have changed your
usage from "complicated" to "complex".  CL certainly is a complex system,
but it definitely is not complicated to use.  In fact, one reason for
its instant (in 1984) popularity and for its continued growth has been
its goal of allowing complex goals to be stated simply.

> > must then build more structure and complication into your application
> > in order to handle the same level of complexity in your problem set.
> 
> That is true, but I introduce complexity at the higher levels. This
> gives a much more flexible system.

Not as much flexibility as you might think, and, as all Lisp systems
go, that flexibility becomes the rope with which you can easily hang
yourself.

> > Why do you view CL as bloated?  Have you done any comparisons between
> > CL and any of several Microsoft products, or Java runtimes or Web
> > browsers lately?
> 
> I agree that there are much worse systems. For that reason I don't use
> Microsoft products, and use Java only at a bare minimum.
> 
> So you say "I'd like to be bad, because others are worse"?

No, I say "I'd like to be good, in spite of others being very much
worse."  How big do you think CL is, anyway?  I am still in a position
to build minimal lisp images with our product that have a less than 1Mb
image, and a runtime memory requirement of 2 Mb.  Certainly not "tiny",
especially by 20-year-old standards, but also certainly not bloated;
there are now cell-phones and robots which have this order of magnitude
of memory capacity.

> > But if your lisp were able to compile down to the machine-level, then
> > it could be as efficient as you want it to be, because you control the
> > compiler
> 
> Yes, this might be a way, and in fact I did that for other systems. I
> tried many ways. But finally I settled for the current one because it
> seemed the best to me. I trade some speed for simplicity and
> flexibility.

I can certainly understand design tradeoffs.  And if you had presented
it that way, you may have had much less disagreement from others.

> And I thought I explained enough why I decided against a compiler.

No, all you've said so far is why compilation is bad.  Without
qualification.  Except when the compiler is C :-)

> > and you would also not have to sacrifice portability.
> 
> 'C' is relatively portable. Non-portability issues arise at a higher
> level (sic), that of operating system calls, for example.

Yes, and the use of manufacturer-supplied header files is a perfectly
good reason to use C for that interface.

> > > > I think you missed the humor in Rainer's irony; you should do some
> > > > Googling for "Pure Lisp" in this group to find discussions on how
> > > > to define what Pure Lisp really is, and how hard it has become to
> > > > define just what a Lisp is at all....
> > > 
> > > Perhaps also a problem of confusing levels?
> 
> > No.  You haven't yet done this googling for that discussion, have you?
> 
> No, because I know how such discussions proceed on Usenet. Similar
> to the current one. People talk at cross-purposes.

Including you; that's precisely what I want to demonstrate to you by
way of your reading that thread.  The reaction you're likely to have
is "How can people disagree on such fundamental aspects of what makes
a Lisp a Lisp?"  And then, when you get into those discussions and see
that people have valid points to make, even while talking at
cross-purposes, you might consider that we have also had valid points
to make on this thread, and that our disagreements stem from some of
the basic assumptions we hold, which differ.  In your paper, you state
that you are challenging some of the basic assumptions and myths that
Lispers hold.  I say: ho hum, we've seen these challenges before, and
in fact you have not gone far enough; there are even more challenges
left unspoken in your paper, and once they are spoken, you are left
with the possibility that even you still have some assumptions and
myths that need challenging.  If you're afraid to be so challenged,
then by all means use the "cross-purposes" excuse not to read the
thread to which I refer.

> I had a very stressful day. Problems with two different customers, while
> having to provide another solution by tomorrow (and posting to this
> thread, too). As soon as I have time, I'll take a look.

Sounds good.  I understand about customer requirements and problem-solving
taking priority.  And in my case, vacation looms large as well...

> > > Hm, I'm _using_ a compiler :-) It just happens to be 'C'. Again,
> > > building the system is another level. Don't mix it up.
> 
> > Why not?  Why should I accept your view of "levels"?  I maintain a very
> 
> You are really a system designer, and don't think on a multitude of
> levels? You have a flat view of the world? Black and white? Of course,
> because there are also only zeroes and ones in a computer, aren't there?

As a system designer, if levels are superfluous, or can be managed with the
same tools, then things become much simpler.  For example, our compiler has
two back-ends; one which allows the "bits" to be assembled (i.e.
direct-executable code-vectors which contain actual machine instructions)
and one which allows an assembler source file to be generated from
the compilation.  The assembler files are then assembled and linked together
with the C O/S-interface files, and an executable run-time is created.
Much of the rest of the lisp is bootstrapped from lisp code built from the
code-vector back-end, which is managed in the lisp heap, and we have a
complete system.  Two (or more) levels, one compiler.

> > successful Common Lisp implementation, and it "mixes up" these levels
> > to great advantage, thankyouverymuch.
> 
> I'd better not buy Franz Lisp tomorrow. I was just considering doing
> that :-)

Your loss.  :-)

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Gareth McCaughan
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87fz6tsaac.fsf@g.mccaughan.ntlworld.com>
Alexander Burger wrote:

[Duane asked:]
>> What's wrong with a monolithic system?
> 
> Inflexible. Difficult to change or adapt. No fun to use. Feeling
> powerless when problems arise; you're at the mercy of that system and
> its producer (perhaps that's what you want, as you are one).

Common Lisp is not inflexible. (I don't know of any language
that is less inflexible than Common Lisp. FORTH might come
close.) It is easy to change and adapt. I have a lot of fun
using it. I feel less powerless when problems with CL arise
than I do when problems with most other languages do.

All purely anecdotal, of course, but it suggests that if
you regard what you said above as objections to Common Lisp
then you need to flesh them out a bit.

> > > Hm, I'm _using_ a compiler :-) It just happens to be 'C'. Again,
> > > building the system is another level. Don't mix it up.
> 
> > Why not?  Why should I accept your view of "levels"?  I maintain a very
> 
> You are really a system designer, and don't think on a multitude of
> levels? You have a flat view of the world? Black and white? Of course,
> because there are also only zeroes and ones in a computer, aren't there?

If you are deliberately misunderstanding Duane, then you
should grow up. If you are accidentally misunderstanding
him, then here's a clue: There is a difference between
being unable to understand levels, and finding productive
ways to blur them or mix them together.

-- 
Gareth McCaughan
.sig under construc
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2o0vdrF5i3eoU1@uni-berlin.de>
Gareth McCaughan <················@pobox.com> wrote:
> Common Lisp is not inflexible. (I don't know of any language
> that is less inflexible than Common Lisp. FORTH might come
> close.)

FORTH is the right way. For a few years I was implementing and using
languages (Lifo, TeaTime) with a Forth syntax but Lisp internally.

> It is easy to change and adapt. I have a lot of fun
> using it. I feel less powerless when problems with CL arise
> than I do when problems with most other languages do.

Yes, in FORTH you _can_ access and modify the system down to the lowest
levels. You cannot do that in Common Lisp.

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Will Hartung
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2o1n55F5q2i7U1@uni-berlin.de>
"Alexander Burger" <···@software-lab.de> wrote in message
···················@uni-berlin.de...
> Gareth McCaughan <················@pobox.com> wrote:
> > Common Lisp is not inflexible. (I don't know of any language
> > that is less inflexible than Common Lisp. FORTH might come
> > close.)
>
> FORTH is the right way. For a few years I was implementing and using
> languages (Lifo, TeaTime) with a Forth syntax but Lisp internally.

While classic FORTH is nice for its extensibility, I don't care for the
static memory model with regards to how the words are stored and referenced.
In the classic FORTH model, you can easily and interactively create and test
words, but you cannot (easily) replace previous functionality without
essentially reloading the entire image.

Crude example:

: THIS 1 ;

: THAT 2 ;

: OTHER THIS THAT + . ;

OTHER
3

: THAT 3 ;

OTHER
3

Modern FORTH seems to have a DEFER word, which will give you that behavior,
but it's not the default behavior. Plus, without any real notion of GC,
you'd never reclaim the memory anyway until a reload.

Now, of course, one could fabricate a FORTH-esque system that implicitly
gives you that kind of behavior, but I think by that time you're far enough
away from a FORTH that you would be better off with a Scheme instead (unless
you really like FORTH syntax).

I toyed around with the concept of a native, self-hosting FORTH for the PC,
but then I realized if I went with the classic model, I'd be essentially
restarting the system constantly and that didn't really interest me. The
idea was to get a good foundation to build the system on incrementally and
interactively.

Regards,

Will Hartung
(·····@msoft.com)
From: Pascal Bourguignon
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87oelggoa6.fsf@thalassa.informatimago.com>
"Will Hartung" <·····@msoft.com> writes:
> Modern FORTH seems to have a DEFER word, which will give you that behavior,
> but it's not the default behavior. Plus, without any real notion of GC,
> you'd never reclaim the memory anyway until a reload.
> 
> Now, of course, one could fabricate a FORTH-esque system that implicitly
> gives you that kind of behavior, but I think by that time you're far enough
> away from a FORTH that you would be better off with a Scheme instead (unless
> you really like FORTH syntax).

Or use Postscript. I don't see anything in FORTH better than in Postscript.


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

Our enemies are innovative and resourceful, and so are we. They never
stop thinking about new ways to harm our country and our people, and
neither do we.
From: Frank Buss
Subject: Re: Lisp in hardware
Date: 
Message-ID: <cfgha6$c82$1@newsreader2.netcologne.de>
Pascal Bourguignon <····@mouse-potato.com> wrote:

> Or use Postscript. I don't see anything in FORTH better than in
> Postscript. 

Postscript is a branch of Forth, so both are very similar:

http://www.levenez.com/lang/history.html

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Siegfried Gonzi
Subject: Re: Lisp in hardware
Date: 
Message-ID: <411B4A90.9D4B8331@kfunigraz.ac.at>
Alexander Burger wrote:

> FORTH is the right way. For a few years I was implementing and using
> languages (Lifo, TeaTime) with a Forth syntax but Lisp internally.

Strange. I am not aware of which kind of applications you actually write,
but let me ask: is your FORTH code re-usable?

I gather from your posts and homepage (honestly speaking: I skimmed over
it not that deeply)  that you are writing business kind of applications.

How do you manage FORTH code? How big are your typical applications?

> Yes, in FORTH you _can_ access and modify the system down to the lowest
> levels. You cannot do that in Common Lisp.
>

I am by no means a defender of CommonLisp, but what is wrong with having
to resort to external C code? I mean that should be possible under
CommonLisp - right?

Surely, Scheme is better prepared when you have to deal with external C
code or better said: Schemers do not have any mental problem with that
circumstance.

That said: Why don't you exclusively use C? I imagine C by itself
comes close to your requirements?

Fensterbrett

>
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2o125aF5kne4U1@uni-berlin.de>
Siegfried Gonzi <···············@kfunigraz.ac.at> wrote:
> Alexander Burger wrote:

> > FORTH is the right way. For a few years I was implementing and using
> > languages (Lifo, TeaTime) with a Forth syntax but Lisp internally.

> Strange. I am not aware of which kind of applications you actually write,
> but let me ask: is your FORTH code re-usable?

I have not written real FORTH since the 80s. Lifo ("Lisp-Forth") was
never used in a commercial application, but TeaTime (Lifo in Java) got
used in a team of up to 10 people, for about 14 projects, between 1997
and 2001. I never really liked it, and returned to Pico Lisp after that
period. So I would say: "not re-usable".


> I am by no means a defender of CommonLisp, but what is wrong with having
> to resort to external C code? I mean that should be possible under
> CommonLisp - right?

Yes. And nothing is wrong with it. I resort to 'C' whenever necessary.


> Surely, Scheme is better prepared when you have to deal with external C
> code or better said: Schemers do not have any mental problem with that
> circumstance.

> That said: Why don't you exclusively use C? I imagine C by itself
> comes close to your requirements?

No, not at all. You know all the advantages of Lisp, I suppose, like
interactivity and easy-to-use composite data structures. Lisp is an
absolute must for me when it comes to mixing code and data in
application programming.

> Fensterbrett

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Rainer Joswig
Subject: Re: Lisp in hardware
Date: 
Message-ID: <joswig-4D7640.20171112082004@news-50.dca.giganews.com>
In article <·················@kfunigraz.ac.at>,
 Siegfried Gonzi <···············@kfunigraz.ac.at> wrote:

> Alexander Burger wrote:
> 
> > FORTH is the right way. For a few years I was implementing and using
> > languages (Lifo, TeaTime) with a Forth syntax but Lisp internally.
> 
> Strange. I am not aware of which kind of applications you actually write,
> but let me ask: is your FORTH code re-usable?
> 
> I gather from your posts and homepage (honestly speaking: I skimmed over
> it not that deeply)  that you are writing business kind of applications.
> 
> How do you manage FORTH code? How big are your typical applications?
> 
> > Yes, in FORTH you _can_ access and modify the system down to the lowest
> > levels. You cannot do that in Common Lisp.
> >
> 
> I am by no means a defender of CommonLisp, but what is wrong with having
> to resort to external C code? I mean that should be possible under
> CommonLisp - right?
> 
> Surely, Scheme is better prepared when you have to deal with external C
> code

Huh? Even though you are saying this again and again, it is wrong.

Scheme standards (IEEE Scheme, or R5RS Scheme) do not say anything
about support for external C code. There are tons of different
ways how various Scheme implementations interface to C.
Some better, some not.

> or better said: Schemers do not have any mental problem with that
> circumstance.

Common Lisp users/implementers also don't seem to have
a problem with that.

GCL and ECL compile to C.
CLISP is written in C.
ACL, LispWorks, MCL, SBCL ... all have some kind of FFI.
There is even an attempt on a universal FFI (UFFI).
 
> That said: Why don't you exclusively use C? I imagine C by itself
> comes close to your requirements?
> 
> Fensterbrett
> 
> >
From: Siegfried Gonzi
Subject: Re: Lisp in hardware
Date: 
Message-ID: <411C63B5.1500547D@kfunigraz.ac.at>
Rainer Joswig wrote:

>
> Huh? Even though you are saying this again and again, it is wrong.
>
> Scheme standards (IEEE Scheme, or R5RS Scheme) do not say anything
> about support for external C code. There are tons of different
> ways in which various Scheme implementations interface to C.
> Some better, some not.

It actually gets subtler. I have never heard anybody say that every
Common Lisp implementation is in a different ballpark.

However, nobody denies that every Scheme implementation is in a different
league.


Fensterbrett


From: ·········@random-state.net
Subject: Re: Lisp in hardware
Date: 
Message-ID: <cfi3ua$5pai6$1@midnight.cs.hut.fi>
Siegfried Gonzi <···············@kfunigraz.ac.at> wrote:

> Surely, Scheme is better prepared when you have to deal with external C
> code or better said: Schemers do not have any mental problem with that
> circumstance.

Can you please be more specific? How is Scheme better prepared? What kind
of problems do you perceive in the CL mindset that makes interfacing with
foreign code harder?

I find this statement a bit strange, given that all the Common Lisp
implementations I've used have a foreign function interface to C, and
given the number of library wrappers people seem to be writing.

Cheers,

 -- Nikodemus                   "Not as clumsy or random as a C++ or Java. 
                             An elegant weapon for a more civilized time."
From: Paolo Amoroso
Subject: Re: Lisp in hardware
Date: 
Message-ID: <87wu04efpd.fsf@plato.moon.paoloamoroso.it>
Alexander Burger <···@software-lab.de> writes:

> I'd better not buy Franz Lisp tomorrow. I was just considering doing
> that :-)

You couldn't buy it anyway: I guess it hasn't been in stock for
the past couple of decades.


Paolo
-- 
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
Recommended Common Lisp libraries/tools (Google for info on each):
- ASDF/ASDF-INSTALL: system building/installation
- CL-PPCRE: regular expressions
- UFFI: Foreign Function Interface
From: Siegfried Gonzi
Subject: Re: Lisp in hardware
Date: 
Message-ID: <411B1443.7812B605@kfunigraz.ac.at>
Alexander Burger wrote:

>
>
> I have the feeling that most Lisp users see Lisp as a monolithic system,
> and they want to use it for any type of (application) programming.
> That's why IMHO the language got bloated and complicated.

What do you think of  "Scheme" in that respect?

Fensterbrett
From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2o0r69F5k2bcU2@uni-berlin.de>
Siegfried Gonzi <···············@kfunigraz.ac.at> wrote:
> Alexander Burger wrote:
> > I have the feeling that most Lisp users see Lisp as a monolithic system,
> > and they want to use it for any type of (application) programming.
> > That's why IMHO the language got bloated and complicated.

> What do you think of  "Scheme" in that respect?

I've never used Scheme, and can only refer to what I've heard.
But from that I feel it is a big step in the right direction.

> Fensterbrett

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Siegfried Gonzi
Subject: Re: Lisp in hardware
Date: 
Message-ID: <411B3E60.BFE34813@kfunigraz.ac.at>
Alexander Burger wrote:

> I've never used Scheme, and can only refer to what I've heard.
> But from that I feel it is a big step in the right direction.

I do not understand the "why"; why  are you still hurting yourself with Lisp?

I mean tools like "Bigloo" would likely better suit you.

Fensterbrett

From: Alexander Burger
Subject: Re: Lisp in hardware
Date: 
Message-ID: <2o0uhnF4vh3dU1@uni-berlin.de>
Siegfried Gonzi <···············@kfunigraz.ac.at> wrote:
> Alexander Burger wrote:

> > I've never used Scheme, and can only refer to what I've heard.
> > But from that I feel it is a big step in the right direction.

> I do not understand the "why"; why  are you still hurting yourself with Lisp?

?? Where is a "why"?

> I mean tools like "Bigloo" would likely better suit you.

Pico Lisp suits me :-)

- Alex
-- 
   Software Lab. Alexander Burger
   Bahnhofstr. 24a, D-86462 Langweid
   ···@software-lab.de, http://www.software-lab.de, +49 821 9907090
From: Duncan Entwisle
Subject: Re: Lisp in hardware
Date: 
Message-ID: <family*.*entwisle*-33D584.14263108082004@newstrial.btopenworld.com>
In article <············@newsreader2.netcologne.de>,
 Frank Buss <··@frank-buss.de> wrote:

> I want to implement a processor core in a Xilinx FPGA 
> (http://www.jroller.com/page/fb), which can execute Lisp. I think it should 
> be possible to do something like Pico Lisp, which is not a normal compiler, 
> but more an interpreter:

I'm also considering implementing a Lisp interpreter in hardware. I 
appreciate that it's probably not going to be especially efficient, but 
I'm sure I'll have fun and learn along the way :-) I'm planning on using 
the Xilinx Spartan-3 starter kit.

Two papers that (to my non-expert eyes, on a superficial reading :-) 
describe hardware interpreters for a LISP are:-

AI Memo No. 514:
"Design of LISP-Based Processors or, SCHEME: A Dielectric LISP or, 
Finite Memories Considered Harmful or, LAMBDA: The Ultimate Opcode"
Guy Lewis Steele Jr. and Gerald Jay Sussman.

AI Memo No. 1040:
"Scheme86: A System for Interpreting Scheme"
Andrew A. Berlin and Henry M. Wu.

(I can probably hunt down the exact URL I found those papers if you 
really need them, but I don't have them to hand, and they're not too 
well hidden.)

From reading about the Lisp Machine, it seems to implement an 
architecture that is geared towards executing compiled Lisp code 
efficiently.

Anyway, good luck with your project, I hope you enjoy it :-)

Duncan.
From: Frank Buss
Subject: Re: Lisp in hardware
Date: 
Message-ID: <cf7q8k$7vn$1@newsreader2.netcologne.de>
Duncan Entwisle <··················@*btinternet*.*com> wrote:

> I'm also considering implementing an Lisp interpreter in hardware. I 
> appreciate that it's probably not going to be especially efficient,
> but I'm sure I'll have fun and learn along the way :-) I'm planning on
> using the Xilinx Spartan-3 starter kit.

I already have a Spartan-3 starter kit, too, and writing hardware 
descriptions in Verilog is not much more difficult than writing programs 
in C, so it should be easy to implement the interpreter with it.

> AI Memo No. 514:
> "Design of LISP-Based Processors or, SCHEME: A Dielectric LISP or, 
> Finite Memories Considered Harmful or, LAMBDA: The Ultimate Opcode"
> Guy Lewis Steele Jr. and Gerald Jay Sussman.

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-514.pdf

This is exactly the kind of CPU I was thinking of! The article describes 
a simple version of Lisp and a "hardware evaluator", which executes a 
binary representation of s-expressions. The second half of the article 
describes the implementation (with some background information on how 
long various details took and who did them; it was a university project) 
down to the gate level and the placement on the chip area. Fortunately I 
don't need to deal with these details, because the Xilinx tools do the 
chip synthesis from Verilog without manual help, so I can focus on the 
functionality.
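The cell-memory model that such hardware evaluators are built on can be sketched in software. Below is a minimal Python illustration (not from AIM-514, and not Verilog) of the idea: memory is an array of tagged cells, a "pointer" is an index into that array, and CAR/CDR are simple field extractions. The word layout, tag values, and bump allocator here are illustrative assumptions, not the paper's actual design.

```python
# Tag bits distinguish immediate integers from pointers into cell memory.
TAG_NUM = 0   # payload is an immediate integer
TAG_CONS = 1  # payload is an index into cell memory

def make_word(tag, payload):
    # Pack a 4-bit tag and 28-bit payload into one 32-bit word.
    return (tag << 28) | (payload & 0x0FFFFFFF)

def tag_of(word):
    return word >> 28

def payload_of(word):
    return word & 0x0FFFFFFF

class CellMemory:
    """A flat memory of cons cells; each cell is two words (car, cdr)."""

    def __init__(self, size=1024):
        self.mem = [0] * (2 * size)
        self.free = 0  # bump allocator; a real design needs a GC

    def cons(self, car_word, cdr_word):
        idx = self.free
        self.free += 1
        self.mem[2 * idx] = car_word
        self.mem[2 * idx + 1] = cdr_word
        return make_word(TAG_CONS, idx)

    def car(self, word):
        # In hardware this is a single memory fetch at 2*index.
        assert tag_of(word) == TAG_CONS
        return self.mem[2 * payload_of(word)]

    def cdr(self, word):
        assert tag_of(word) == TAG_CONS
        return self.mem[2 * payload_of(word) + 1]

# Build the list (1 2 3) and walk it with CAR/CDR.
m = CellMemory()
NIL = make_word(TAG_CONS, 0x0FFFFFFF)  # distinguished nil, never dereferenced
lst = m.cons(make_word(TAG_NUM, 1),
             m.cons(make_word(TAG_NUM, 2),
                    m.cons(make_word(TAG_NUM, 3), NIL)))
print(payload_of(m.car(lst)))         # 1
print(payload_of(m.car(m.cdr(lst))))  # 2
```

The point of the sketch is that CAR and CDR reduce to fixed-offset memory reads, which is exactly why they are cheap to implement as primitive hardware operations.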

> AI Memo No. 1040:
> "Scheme86: A System for Interpreting Scheme"
> Andrew A. Berlin and Henry M. Wu.

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-1040.pdf

Not very detailed, but with interesting new ideas. Partly based on the 
previous article.

I think now I have enough information to implement a little Verilog Lisp 
evaluator, which I'll show in a new thread. Thanks for all your comments.

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Joe Marshall
Subject: Re: Lisp in hardware
Date: 
Message-ID: <brhk4fjt.fsf@ccs.neu.edu>
Frank Buss <··@frank-buss.de> writes:

> But first I want to know how it was solved in other systems, like the 
> Symbolics Lisp Machine. Where can I find a hardware description of the 
> processor? Any other resources I should read?

Lisp Machine Progress Report - AI memo 444 (1977)
Alan Bawden, Richard Greenblatt, Jack Holloway, Thomas Knight, David Moon, Daniel Weinreb 

Design of LISP-Based Processors[pdf] - AI memo 514 (1979)
Guy Lewis Steele Jr. and Gerald Jay Sussman 

CADR - AI memo 528 (1979)
Thomas F. Knight, Jr., David Moon, Jack Holloway, and Guy L. Steele, Jr. 

Symbolic language data processing system - US Patent No. 4887235
John T. Holloway, David A. Moon, Howard I. Cannon, Thomas F. Knight, Bruce E. Edwards, Daniel L. Weinreb 

Scheme 86 - An Architecture for Microcoding a Scheme Interpreter - AI memo 953 (1988)
Henry M. Wu 

Lisp Machine, Inc. K-machine 
From: Nick Patavalis
Subject: Re: Lisp in hardware
Date: 
Message-ID: <slrnci23uu.h3l.npat@gray.efault.net>
On 2004-08-09, Joe Marshall <···@ccs.neu.edu> wrote:
>
> Lisp Machine, Inc. K-machine 
>

A few hours ago I read your description of the K-machine
architecture. Excellent article, thanks! Apart from all the valuable
technical insights, it made me sort of "nostalgic" for the days when
you could actually find ALU ICs in AMD's catalog (or in any IC
manufacturer's catalog for that matter), and when register files were
implemented by discrete SRAM components. Do you happen to remember what
packaging the AMD 29332 came in?

Thanks again
/npat
From: Joe Marshall
Subject: Re: Lisp in hardware
Date: 
Message-ID: <n00ukdr8.fsf@ccs.neu.edu>
Nick Patavalis <····@efault.net> writes:

> On 2004-08-09, Joe Marshall <···@ccs.neu.edu> wrote:
>>
>> Lisp Machine, Inc. K-machine 
>>
>
> A few hours ago I read your description of the K-machine
> architecture. Excellent article, thanks! Apart from all the valuable
> technical insights, it made me sort of "nostalgic" for the days when
> you could actually find ALU ICs in AMD's catalog (or in any IC
> manufacturer's catalog for that matter), and when register files were
> implemented by discrete SRAM components. Do you happen to remember what
> packaging the AMD 29332 came in?

It was a PGA.  You had to put a bypass cap under it.
From: neo88
Subject: Re: Lisp in hardware
Date: 
Message-ID: <6a73bb68.0408111226.27c3e48b@posting.google.com>
Frank Buss <··@frank-buss.de> wrote in message news:<············@newsreader2.netcologne.de>...
> I want to implement a processor core in a Xilinx FPGA 
> (http://www.jroller.com/page/fb), which can execute Lisp. I think it should 

Do you have an English version of that?
This sounds very interesting; I just wish I could read what you have
on your blog. Thanks.

-- 
May the Source be with you.
neo88 (Philip Haddad)
From: Frank Buss
Subject: Re: Lisp in hardware
Date: 
Message-ID: <cfe0et$hld$1@newsreader2.netcologne.de>
······@truevine.net (neo88) wrote:

> Do you have an English version of that?
> This sounds very interesting; I just wish I could read what you have
> on your blog. Thanks.

Looks like Google does a good job translating it (but missing the last
blog entry). Not translated: Telespiel = video game. 

http://translate.google.com/translate?u=http%3A%2F%2Fwww.jroller.com%2Fpage%2Ffb&langpair=de%7Cen&hl=de&ie=UTF-8&ie=UTF-8&oe=UTF-8&prev=%2Flanguage_tools 

I'll write additional documentation for my Lisp CPU project in English.

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: neo88
Subject: Re: Lisp in hardware
Date: 
Message-ID: <6a73bb68.0408111814.38a3715b@posting.google.com>
Frank Buss <··@frank-buss.de> wrote in message news:<············@newsreader2.netcologne.de>...
> ······@truevine.net (neo88) wrote:
> 
> > Do you have an English version of that?
> > This sounds very interesting; I just wish I could read what you have
> > on your blog. Thanks.
> 
> Looks like Google does a good job translating it (but missing the last
> blog entry). Not translated: Telespiel = video game. 
> 
> http://translate.google.com/translate?u=http%3A%2F%2Fwww.jroller.com%2Fpage%2Ffb&langpair=de%7Cen&hl=de&ie=UTF-8&ie=UTF-8&oe=UTF-8&prev=%2Flanguage_tools 
> 
> I'll write additional documentation for my Lisp CPU project in English.

Excellent! Thanks! Looking forward to seeing what you come up with :-)

-- 
May the Source be with you.
neo88 (Philip Haddad)
From: Julian Stecklina
Subject: Re: Lisp in hardware
Date: 
Message-ID: <86ekmdz1l4.fsf@goldenaxe.localnet>
Frank Buss <··@frank-buss.de> writes:

> Looks like Google does a good job translating it (but missing the last
> blog entry). Not translated: Telespiel = video game. 

No wonder Google's dictionary does not know it. "Telespiel"
reminds me of the ColecoVision and the neat tennis game it had. ;)
"Videospiel" replaced that term at least a decade ago.

Regards,
-- 
                    ____________________________
 Julian Stecklina  /  _________________________/
  ________________/  /
  \_________________/  LISP - truly beautiful