From: Andrew Le Couteur Bisson
Subject: Lisp machines
Date: 
Message-ID: <bmhl4o$sur$1@sparta.btinternet.com>
I've seen a lot of interest in Lisp machines on this group.  Has it occurred
to anyone that a powerful modern Lisp machine can now be built for around
UKP500-1000 using programmable logic?  It would require some particularly
gifted people to make a good job of it, but it would also be an extremely
interesting project to be involved in.
You can put multiple high-speed 32-bit RISC cores on a single chip, so I
guess that a fast Lisp machine with hardware garbage collection would be
feasible....

Andy

From: Joe Marshall
Subject: Re: Lisp machines
Date: 
Message-ID: <ptgzefri.fsf@ccs.neu.edu>
"Andrew Le Couteur Bisson" <·····@btinternet.com> writes:

> I've seen a lot of interest in Lisp machines on this group.  Has it
> occurred to anyone that a powerful modern Lisp machine can now be
> built for around UKP500-1000 using programmable logic?

Yes.

> It would require some particularly gifted people to make a good job
> of it, but it would also be an extremely interesting project to be
> involved in.  You can put multiple high-speed 32-bit RISC cores on a
> single chip, so I guess that a fast Lisp machine with hardware
> garbage collection would be feasible....

Let me know when you get a backer.
From: Andrew Le Couteur Bisson
Subject: Re: Lisp machines
Date: 
Message-ID: <bmhmm7$7if$1@titan.btinternet.com>
"Joe Marshall" <···@ccs.neu.edu> wrote in message
·················@ccs.neu.edu...
> "Andrew Le Couteur Bisson" <·····@btinternet.com> writes:
>
> Let me know when you get a backer.

Hmm!  When I spoke of gifted people I didn't include myself    : )
An 8 bit micro-controller is more my size!

Incidentally, I presume that there has been little work done on the hardware
implementation of Lisp for a while.  It would be interesting to see what
modern design methods and analysis could bring to the design of a new Lisp
machine.

Andy
From: Christopher C. Stacy
Subject: Re: Lisp machines
Date: 
Message-ID: <uismrwksw.fsf@dtpq.com>
>>>>> On Tue, 14 Oct 2003 20:35:20 +0000 (UTC), Andrew Le Couteur Bisson ("Andrew") writes:
 Andrew> Incidentally, I presume that there has been little work done
 Andrew> on the hardware implementation of Lisp for a while.  It would
 Andrew> be interesting to see what modern design methods and analysis
 Andrew> could bring to the design of a new Lisp machine.

The interesting thing about the Lisp Machine was its software.
The hardware was only necessary for acceleration because there
were no suitably powerful processors available a quarter of a
century ago.  Today's 64-bit processors are plenty fast enough.
The last version of the Lisp Machine that was sold was a
software-based solution that ran on the DEC Alpha workstation.
(The DEC Alpha being the fastest, or only, 64-bit CPU in 1992.)
From: ·············@comcast.net
Subject: Re: Lisp machines
Date: 
Message-ID: <7k375mof.fsf@comcast.net>
······@dtpq.com (Christopher C. Stacy) writes:

>>>>>> On Tue, 14 Oct 2003 20:35:20 +0000 (UTC), Andrew Le Couteur Bisson ("Andrew") writes:
>  Andrew> Incidentally, I presume that there has been little work done
>  Andrew> on the hardware implementation of Lisp for a while.  It would
>  Andrew> be interesting to see what modern design methods and analysis
>  Andrew> could bring to the design of a new Lisp machine.
>
> The interesting thing about the Lisp Machine was its software.
> The hardware was only necessary for acceleration because there
> were no suitably powerful processors available a quarter of a
> century ago.  Today's 64-bit processors are plenty fast enough.

Oh, I don't know.  My guess is you could get a hundredfold speedup
with special hardware.

On the other hand, the fastest FPGAs are about 1/10th the speed
of the fastest processors, so the net gain would be much less
impressive.
From: Frank A. Adrian
Subject: Re: Lisp machines
Date: 
Message-ID: <IQ2jb.1239$CL.56752@news.uswest.net>
·············@comcast.net wrote:

> 
> Oh, I don't know.  My guess is you could get a hundredfold speedup
> with special hardware.

I don't think so.  AFAICS, the only advantage would be to process tags in
parallel with other operations and maybe some better efficiencies in stack
support and (possibly, but unlikely) GC and memory access.  The problem is
that other than tag checking, a lot of the most frequently executed ops
(object slot access, arith/logical/test ops) look a lot like RISC
instructions anyway.  Another issue is that unless you spent a bunch of
chip area on speculative execution and the memory system (as well as
compiler support for the same), you don't necessarily have enough work
between calls/branches to get a lot of speed out of pipelining, either.
In short, unless it was a system with a VERY advanced design, it wouldn't
perform as well as an implementation with a well-tuned tag architecture
on a normal processor.
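
To make "a well-tuned tag architecture on a normal processor" concrete,
here is a rough Common Lisp sketch (made-up example code, not taken from
any real benchmark): with fixnum declarations, a native-code compiler such
as CMUCL or SBCL keeps the values as tagged machine words and open-codes
the additions, so the residual cost of tagging is an occasional check or
shift rather than a per-operation penalty.

;; Hypothetical example.  In CMUCL/SBCL the fixnum tag is the low zero
;; bits, so a native ADD works directly on the tagged words once the
;; compiler knows everything in sight is a fixnum.
(defun sum-fixnums (v)
  (declare (type simple-vector v)
           (optimize (speed 3) (safety 0)))
  (let ((acc 0))
    (declare (type fixnum acc))
    (dotimes (i (length v) acc)
      (setf acc (the fixnum (+ acc (the fixnum (svref v i))))))))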

> On the other hand, the fastest FPGAs are about 1/10th the speed
> of the fastest processors, so the net gain would be much less
> impressive.

See above for why you might get away with a chip that executed Lisp at about
1/2 the speed of a well-tuned Lisp implementation for a standard processor.

faa
From: ·············@comcast.net
Subject: Re: Lisp machines
Date: 
Message-ID: <ad8299w3.fsf@comcast.net>
"Frank A. Adrian" <·······@ancar.org> writes:

> ·············@comcast.net wrote:
>
>> 
>> Oh, I don't know.  My guess is you could get a hundredfold speedup
>> with special hardware.
>
> I don't think so.  AFAICS, the only advantage would be to process tags in
> parallel with other operations and maybe some better efficiencies in stack
> support and (possibly, but unlikely) GC and memory access.  The problem is
> that other than tag checking, a lot of the most frequently executed ops
> (object slot access, arith/logical/test ops) look a lot like RISC
> instructions anyway.  Another issue is that unless you spent a bunch of
> chip area on speculative execution and the memory system (as well as
> compiler support for the same), you don't necessarily have enough work
> between calls/branches to get a lot of speed out of pipelining, either.
> In short, unless it was a system with a VERY advanced design, it wouldn't
> perform as well as an implementation with a well-tuned tag architecture
> on a normal processor.

Take a look at Henry Wu's thesis at
    http://www.swiss.ai.mit.edu/~mhwu/scheme86/scheme86-home.html

and I think that it would be possible to use dynamic optimization
techniques to optimize across call boundaries.

Additionally, you would want to implement things like MEMQ and ASSQ
in the hardware.
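
(For reference, those are the classic EQ-based list searches; in portable
Common Lisp they amount to the sketch below -- just to show what the
proposed hardware would be chasing, cons cell by cons cell.  The
definitions follow the traditional MEMQ/ASSQ behaviour.)

;; MEMQ: return the tail of LIST whose CAR is EQ to ITEM, else NIL.
(defun memq (item list)
  (do ((tail list (cdr tail)))
      ((null tail) nil)
    (when (eq (car tail) item)
      (return tail))))

;; ASSQ: return the first pair in ALIST whose CAR is EQ to KEY, else NIL.
(defun assq (key alist)
  (do ((pairs alist (cdr pairs)))
      ((null pairs) nil)
    (let ((pair (car pairs)))
      (when (and (consp pair) (eq (car pair) key))
        (return pair)))))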

>> On the other hand, the fastest FPGAs are about 1/10th the speed
>> of the fastest processors, so the net gain would be much less
>> impressive.
>
> See above for why you might get away with a chip that executed Lisp at about
> 1/2 the speed of a well-tuned Lisp implementation for a standard processor.

I think the *real* problem is the front end.  Consider putting one of
these FPGAs on a PCI bus.  The communication between the bus and RAM
is glacial.  You'd have to have one hell of a calculation for it to
be worth the time to download the problem then upload the solution.
From: Frank A. Adrian
Subject: Re: Lisp machines
Date: 
Message-ID: <Lwdjb.448$HL.24124@news.uswest.net>
·············@comcast.net wrote:

> I think the real problem is the front end.  Consider putting one of
> these FPGAs on a PCI bus.  The communication between the bus and RAM
> is glacial.  You'd have to have one hell of a calculation for it to
> be worth the time to download the problem then upload the solution.

Well, as you said previously, this is a Lisp Machine.  You'd need to have a
custom interface between the processor and the RAM - no PCI mediation
needed.  If you did want to do it as an add-on card to plug into stock
hardware, you'd put the Lisp machine's main memory on the card and only
use the bus for paging.

faa
From: Rayiner Hashem
Subject: Re: Lisp machines
Date: 
Message-ID: <bmiruh$pal$1@news-int2.gatech.edu>
> Oh, I don't know.  My guess is you could get a hundredfold speedup
> with special hardware.

Hmm. How do you see that? If Lisp is 90% the speed of C, you're saying that
a specialized Lisp processor would be 90x faster than C on a regular
processor? Sounds a bit hard to believe.

From my benchmarks, the worst case for Lisp (aside from maybe pathological
GC behavior) is probably about 10x slower than C for similar operations.
You'd hit this case for inner-loopy, numeric code running with full dynamic
dispatch in Lisp (say your compiler didn't infer the types, and didn't have
tagged integers) while the C code was statically typed and inlined.

My guess is that the processor's ability to speculatively execute the proper
function, without waiting on the type test, is responsible for the fairly
decent performance of fully dynamic dispatch. The real overhead comes not
from the type manipulation, but from the inability to inline the target
function. Modern processors (especially my P4) pay tremendous penalties for
branches.

If you were going to design a "Lisp CPU", I'd say that tag-manipulation
acceleration and array bounds-checking probably wouldn't be important
features. Most modern CPUs have more integer units than they can keep busy,
so farming off that work to them won't be a big hit. More important would
be some sort of on-CPU JIT (think P4 trace cache on steroids) that would
negate the performance impact of not being able to inline generic
functions. Also useful would be some scheme to minimize the performance
impact of indirect jumps. In one of my tests, I found that it was actually
3x faster to do mono-dispatch through a 1200-element switch statement than
to use a dispatch table of function pointers. This feature would be helpful
in code that uses functions as first-class values. Beyond that, some of the
technologies that integrate dozens of simple processing elements into RAM
might be useful in relieving the memory bandwidth pressure that comes from
all the copying encouraged by a functional style of programming.
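
To illustrate the two dispatch styles compared above, here is a toy Common
Lisp sketch (hypothetical code with made-up names, a handful of cases
rather than 1200): an open-coded CASE that the compiler can turn into a
branch tree or jump table, versus an indirect FUNCALL through a table,
which costs an indirect jump the branch predictor cannot see through.

;; Open-coded dispatch: the compiler sees every target.
(defun dispatch-by-case (op x)
  (case op
    (:inc (1+ x))
    (:dec (1- x))
    (:neg (- x))
    (otherwise x)))

;; Table dispatch: one indirect call per operation.
(defparameter *op-table* (vector #'1+ #'1- #'-))

(defun dispatch-by-table (op-index x)
  (funcall (svref *op-table* op-index) x))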

Note: In my post, I use "generic functions" to refer to generic dispatch in
general, not just CLOS generic functions. Thus, '+' and '-' would be
generic functions, even though they are usually special-cased in Lisp
implementations for types like integer.
From: Joe Marshall
Subject: Re: Lisp machines
Date: 
Message-ID: <brsi36u3.fsf@ccs.neu.edu>
Rayiner Hashem <·······@mail.gatech.edu> writes:

>> Oh, I don't know.  My guess is you could get a hundredfold speedup
>> with special hardware.
>
> Hmm. How do you see that? If Lisp is 90% the speed of C, you're saying that
> a specialized Lisp processor would be 90x faster than C on a regular
> processor? Sounds a bit hard to believe. 

Well, the proof of the pudding is in the tasting.

People have not yet given up on language-specific hardware, and were I
designing a language-specific machine I wouldn't restrict myself to
only those features that are easy to implement on stock hardware (I'd
want things like Zetalisp's forwarding pointers and external value
cell pointers).

The fact is that stock hardware these days is actually software
that emulates the x86 instruction set.  Removing that layer of
interpretation ought to improve performance.

Unfortunately for me, you can't prove a negative, so the onus would be
on me to demonstrate working hardware (then you'd be playing catch-up
with the software version). 
From: Christian Lynbech
Subject: Re: Lisp machines
Date: 
Message-ID: <87k776cz7q.fsf@dhcp229.ted.dk.eu.ericsson.se>
>>>>> "Joe" == Joe Marshall <···@ccs.neu.edu> writes:

Joe> Rayiner Hashem <·······@mail.gatech.edu> writes:
>>> Oh, I don't know.  My guess is you could get a hundredfold speedup
>>> with special hardware.
>> 
>> Hmm. How do you see that? If Lisp is 90% the speed of C, you're saying that
>> a specialized Lisp processor would be 90x faster than C on a regular
>> processor? Sounds a bit hard to believe. 

Joe> Well, the proof of the pudding is in the tasting.

Also we should not forget that the apparent success of stock hardware
over specialised hardware, as in the Symbolics machines, owes more to
the volume of stock hardware (which brings bigger profits, and in turn
a faster pace of development) than to any inherent technical superiority.

In their day, Lisp machines were significantly better at executing Lisp
than stock hardware, and I see no reason why Symbolics wouldn't have
been able to stay ahead if it had had as big a wallet as Intel has.


------------------------+-----------------------------------------------------
Christian Lynbech       | christian ··@ defun #\. dk
------------------------+-----------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
                                        - ·······@hal.com (Michael A. Petonic)
From: Barry Margolin
Subject: Re: Lisp machines
Date: 
Message-ID: <9Hdjb.45$lK3.6@news.level3.com>
In article <··············@dhcp229.ted.dk.eu.ericsson.se>,
Christian Lynbech  <·················@ericsson.com> wrote:
>In their day, Lisp machines were significantly better at executing Lisp
>than stock hardware, and I see no reason why Symbolics wouldn't have
>been able to stay ahead if it had had as big a wallet as Intel has.

IIRC, the machines that really killed the Lisp Machines were Sun and SGI.
Neither of them used Intel processors, although those companies certainly
did have much bigger wallets than Symbolics.  I think they won because they
ran commodity Unix software and there were *also* decent Lisp
implementations available for them, so you could do in one box what you
would otherwise need two for (and desktop Suns were less expensive than
Lisp Machines).

As long as Lisp Machines remained a niche product, I don't think they could
ever hope to compete with Unix-based workstations at that time.

-- 
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Rayiner Hashem
Subject: Re: Lisp machines
Date: 
Message-ID: <bmk83s$kn7$1@news-int.gatech.edu>
> In their day, Lisp machines were significantly better at executing Lisp
> than stock hardware, and I see no reason why Symbolics wouldn't have
> been able to stay ahead if it had had as big a wallet as Intel has.
On the other hand, those previous general-purpose processors were also much
simpler. They didn't have a lot of the special hardware (several parallel
integer units; speculative reads, writes, and execution; etc.) that makes
Lisp faster on modern hardware.

You could get a pretty good speed-up (especially in situations where static
optimization by the compiler doesn't help) by implementing some of those
features I talked about in my previous post, but such a processor would
speed up not just Lisp, but C++/Java/etc code as well. I think the whole
thing comes down to the fact that the aspects of the Lisp machine model
that differ from languages like Java or C# (for example, tagged integers)
aren't really a big bottleneck on modern hardware, while the aspects that
*are* big bottlenecks (indirect objects, branches, etc) are common to other
modern languages.
From: Vladimir S.
Subject: Re: Lisp machines
Date: 
Message-ID: <8765iqmj0z.fsf@shawnews.cg.shawcable.net>
Rayiner Hashem <·······@mail.gatech.edu> writes:

> > Oh, I don't know.  My guess is you could get a hundredfold speedup
> > with special hardware.
> 
> Hmm. How do you see that? If Lisp is 90% the speed of C, you're saying that
> > a specialized Lisp processor would be 90x faster than C on a regular
> processor? Sounds a bit hard to believe. 

Well, according to some benchmarks Alan Kay[1] made, even though there
has been something like a 100x speedup (sorry, I can't remember the
exact numbers) in CPU clock speed and memory bandwidth between current
Intel processors and the top-of-the-line D-machine (again, I forget
which one - watch the video), the Intel processor executes Smalltalk
bytecode only 20x as fast.  So, relatively speaking, current
architectures can be said to be 50x (this number I'm pretty sure of)
slower than the corresponding microcoded architecture with specific
support for certain dynamic-language features.

Granted, executing Smalltalk bytecode and compiled Lisp code are two
very different things, but I think the trade-off of simplifying the
compiler (I think the CMUCL Python optimizing compiler weighs in at
around 10-15 megabytes, all of which hangs around in memory and does
all the work in SBCL, making run-time function creation expensive)
vs. changing the CPU architecture (presumably to a general-purpose
microcoded one, so it might very well be the case that you'd simplify
that significantly as well) is a worthwhile one. And of course, it
might very well turn out that a smaller but still noticeable
performance increase for statically compiled languages would result
(nobody said the x86 is a particularly good instruction set for that).

The big benefit one would gain from this simplification (as well as
changing other aspects of the machine architecture - why not go all
out while you're at it?) is in systems programming - one person in
this thread already managed to propose a Lisp OS on mass-produced
hardware, but doing systems programming with it is not exactly easy.

The big thing about this mythical processor IMO sh/would be
microprogramming. If it can accelerate the JVM significantly, there's
a market for it, and languages like Lisp and Smalltalk can
piggyback. As someone else has pointed out, the Intel P6-series is
microcoded[2], so this idea isn't as silly as it sounds.

> From my benchmarks, the worst case for Lisp (aside from maybe pathological
> GC behavior) is probably about 10x slower than C for similar operations.
> You'd hit this case for inner-loopy, numeric code running with full dynamic
> dispatch in Lisp (say your compiler didn't infer the types, and didn't have
> tagged integers) while the C code was statically typed and inlined.

Well, I think the main benefit special hardware would bring would be to
make the not-so-similar operations (think acceleration for shallow
binding of dynamic variables, multiple return values, etc.) really
fast, while completely removing the worst-case penalty for the
"similar" operations. Not to mention the benefits it might bring to
the crazy GC.
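
For concreteness, the "not-so-similar" operations I mean look like this
in plain Common Lisp (a made-up sketch, names invented for illustration):
rebinding a special variable and returning multiple values, neither of
which a stock processor has any special support for, and both of which
dedicated hardware could in principle make nearly free.

;; Dynamic rebinding: PARSE-IN-RADIX temporarily rebinds *RADIX*, and
;; every reference sees the new value until the LET exits.
(defvar *radix* 10)

(defun parse-in-radix (string radix)
  (let ((*radix* radix))
    (parse-integer string :radix *radix*)))

;; Multiple return values: quotient and remainder come back together,
;; without consing a list or structure to hold them.
(defun quotient-and-remainder (a b)
  (floor a b))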

-Vladimir

1 - http://murl.microsoft.com/LectureDetails.asp?1019
2 - http://www.urbanmyth.org/microcode/
From: Pascal Bourguignon
Subject: Re: Lisp machines
Date: 
Message-ID: <878ynnm9wh.fsf@thalassa.informatimago.com>
"Andrew Le Couteur Bisson" <·····@btinternet.com> writes:

> I've seen a lot of interest in Lisp machines on this group.  Has it
> occurred to anyone that a powerful modern Lisp machine can now be
> built for around UKP500-1000 using programmable logic?  It would
> require some particularly gifted people to make a good job of it, but
> it would also be an extremely interesting project to be involved in.
> You can put multiple high-speed 32-bit RISC cores on a single chip, so
> I guess that a fast Lisp machine with hardware garbage collection
> would be feasible....
> 
> Andy

IMHO, this is not a hardware problem.  (We have powerful enough
off-the-shelf hardware.)  Just start a new LISP OS!

-- 
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
From: Paolo Amoroso
Subject: Re: Lisp machines
Date: 
Message-ID: <874qya32ki.fsf@plato.moon.paoloamoroso.it>
Andrew Le Couteur Bisson writes:

> I've seen a lot of interest in Lisp machines on this group.  Has it occurred
> to anyone that a powerful modern Lisp machine can now be built for around
> UKP500-1000 using programmable logic?  It would require some particularly
> gifted people to make a good job of it, but it would also be an extremely
> interesting project to be involved in.

I have just put together a brand new PC stuffed with a lot of metal
(ASUS P4C800 Deluxe motherboard, 2.8 GHz Pentium 4, 2 GB of RAM). It
takes this machine about 10 minutes to fully build SBCL from source
with CMUCL, which is not bad.

By the time such a Lisp Machine became available, general-purpose
hardware would have reduced SBCL compilation time to a matter of
seconds.  General-purpose hardware is fast enough--and relatively
affordable--for Lisp even now.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>