From: John W.F. McClain
Subject: Ideas for new LISP Machines
Date: 
Message-ID: <1992Apr1.033547.20514@athena.mit.edu>
What is the general feeling on purpose-built LISP machines?  Ignoring
marketing issues, do they make sense from a technical viewpoint, or is
it more effective to take an RS6000 or a DECstation and write a virtual
LISP machine?

If one took the hardware approach, what features should be implemented
in hardware to increase LISP performance, or make it more robust,
maintainable, or flexible?

How would these features affect other languages (i.e. C, FORTRAN, COBOL,
Smalltalk...)?

How would these features affect pipelining, multi-issue, etc.?  What
other costs/benefits would they have?

Another way to look at this question is: what features of today's
architectures make the implementation of a LISP system hard?

John W.F. McClain
····@athena.mit.edu

From: Eliot Moss
Subject: Re: Ideas for new LISP Machines
Date: 
Message-ID: <MOSS.92Apr1102048@ibis.cs.umass.edu>
The question was (paraphrased): what about special hardware for executing LISP
(or perhaps other languages) versus general-purpose architectures?

Here are a couple of personal reactions:

The economics are against special-purpose hardware: it is not popular
enough to keep really close to the general-purpose architectures, so it
lags in performance and price/performance. So, even though a special-purpose
architecture might have some performance benefit, the general-purpose
architecture has the advantage of being pushed harder, which in practice
wipes out a lot of the potential technical advantage of a special
architecture.

Special-purpose hardware also makes it harder to support the wide range of
services that people want from their computers (all those great OS things and
applications, such as mail, networking, etc.). On a general-purpose
architecture it is easier to justify the investment and thus provide the total
functionality customers want.

On the other hand, it could be that because of benchmarking, and the languages
used to write many popular applications, the relative performance penalty
(e.g., C versus LISP) for LISP and friends is getting worse, not least
because the architectures are being tuned to C and related languages. Still,
architects seem to pay at least a *little* attention to the needs of other
languages, and the RISC approach reduces some of the commitment to particular
approaches (e.g., the VAX call instructions are biased towards one way of
doing stacks, whereas a RISC allows you to do whatever you want, at the risk
of making it harder to interface with other software). Whether the performance
penalty has been changing would be an interesting thing to investigate, but so
much depends on the language implementation techniques that it is hard to
separate out the architectural effects.

For the most part, though, I think LISP, Smalltalk, etc. hardware is dead for
the economic and market reasons touched on above. Adding some minor features
to general-purpose architectures seems to be the farthest one should
rationally go in the face of market forces today and in the foreseeable
future.
--

		J. Eliot B. Moss, Assistant Professor
		Department of Computer Science
		Lederle Graduate Research Center
		University of Massachusetts
		Amherst, MA  01003
		(413) 545-4206, 545-1249 (fax); ····@cs.umass.edu
From: Torben AEgidius Mogensen
Subject: Re: Ideas for new LISP Machines
Date: 
Message-ID: <1992Apr2.095632.11709@odin.diku.dk>
····@m4-035-4.MIT.EDU (John W.F. McClain) writes:

>What is the general feeling on purpose-built LISP machines?  Ignoring
>marketing issues, do they make sense from a technical viewpoint, or is
>it more effective to take an RS6000 or a DECstation and write a virtual
>LISP machine?

The problem with special-purpose LISP architectures is that they tend
to use a technology that is a few years behind the "hottest" RISC
processors. This means that clever implementations on RISC processors
tend to be faster than, or at least as fast as, implementations on
dedicated LISP hardware. I would expect that, with present compiler
technology, a dedicated LISP architecture using state-of-the-art
design and fabrication technology would be 20-50% faster than LISP on
RISC processors.

>If one took the hardware approach, what features should be implemented
>in hardware to increase LISP performance, or make it more robust,
>maintainable, or flexible?

One aspect is the tagged values used in LISP. Previous LISP machines
have focused on this. Another aspect is the memory interface: LISP tends
to use its memory in different patterns than C or similar languages.
Heap access tends to be less local than stack access, and garbage
collection is more of an issue in LISP than in C ;-). Write-through
caches will probably stall more on LISP than on C, so a write-back
cache or a write buffer will probably help. Good MMUs will help GC, as
they can trap when the heap is full and generally assist in the
marking of used memory, etc. Andrew Appel has written about this
subject. The MMU built into the ARM600 processor has several features
supporting GC and heaps in general.
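
To make the tag cost concrete, here is a minimal sketch in C of the
kind of low-tag representation LISPs on stock hardware use (the tag
assignments are illustrative, not those of any particular system):

    #include <stdint.h>

    typedef intptr_t lispobj;        /* one tagged machine word            */
    #define TAG_MASK   3             /* low two bits hold the type tag     */
    #define TAG_FIXNUM 0             /* 00 = fixnum; other tags for conses */

    #define MAKE_FIXNUM(n) ((lispobj)(n) << 2)
    #define FIXNUM_VAL(x)  ((x) >> 2)

    /* Slow path for bignums, floats, etc. (elided). */
    static lispobj generic_add(lispobj a, lispobj b) { (void)a; (void)b; return 0; }

    static lispobj lisp_add(lispobj a, lispobj b)
    {
        /* A LISP machine does this test in parallel with the ALU op and
           traps only on mismatch; a stock RISC pays these instructions
           on every generic arithmetic operation. */
        if (((a | b) & TAG_MASK) != TAG_FIXNUM)
            return generic_add(a, b);
        return a + b;                /* 00-tagged fixnums add without untagging */
    }

    int main(void)
    {
        return FIXNUM_VAL(lisp_add(MAKE_FIXNUM(2), MAKE_FIXNUM(3))) == 5 ? 0 : 1;
    }

With tag 00 the common case costs one extra test-and-branch per
operation; hardware tag checking makes even that free.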

>How would these features affect other languages (i.e. C, FORTRAN, COBOL,
>Smalltalk...)?

Most features that help LISP will also help Smalltalk (especially
those concerned with the memory interface). I don't think they would
either help or harm the other languages, except by using silicon that
might be put to better use for those languages.

>How would these features affect pipelining, multi-issue, etc.?  What
>other costs/benefits would they have?

>Another way to look at this question is: what features of today's
>architectures make the implementation of a LISP system hard?

Hardly any.

	Torben Mogensen (·······@diku.dk)
From: Ice
Subject: Prolog machines? WAS: Re: Ideas for new LISP Machines
Date: 
Message-ID: <1992Apr03.183135.1725@skynet.uucp>
Pursuant to this, I am wondering whether there exist hardware
designs to beef up a Prolog system. Prolog is very slow, but
the theory of nonprocedural programming is intuitively beautiful.

-- 
/* Ice's Hypermedia Sig */ #include <cyberpunk.h> #include <industrial.h> 
Hardware required: biological neural net with _unsupervised_learning_
Audio() Burning Inside by Ministry; "The Mind is a Terrible Thing to Taste" 
Visual() Sarah Connor's flesh on fire blasted away leaving screaming skeleton 
From: R. Kym Horsell
Subject: Re: Prolog machines? WAS: Re: Ideas for new LISP Machines
Date: 
Message-ID: <1992Apr4.170111.5314@newserve.cc.binghamton.edu>
In article <·····················@skynet.uucp> ···@skynet.uucp (Ice) writes:
>Pursuant to this, I am wondering whether there exist hardware
>designs to beef up a Prolog system. Prolog is very slow, but
                                     ^^^^^^^^^^^^^^^^^^^
>the theory of nonprocedural programming is intuitively beautiful.

This is a little ``inaccurate''.

Whereas speeds -- measured in the traditional Kilo Inferences Per Second
(KLIPS) on the 30-element naive reverse program -- were 3-4 for old-style
interpreters and perhaps 10-20 for compilers on CISCy VAXen, modern
Prolog compilers are capable of MLIPS on workstations (I'm speaking
specifically of the DEC 5000/200).

The ``inference'' is equivalent to a reasonable number of traditional
instructions -- an _exact_ number is impossible to pin down
considering all the variables (and your exact definition of
``inference''), but perhaps about 100 at most.
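
As a back-of-the-envelope check (my figures, so treat them as rough):
the standard 30-element naive reverse performs 496 logical inferences
per call, so 1 MLIPS is about half a millisecond per reversal. A DEC
5000/200 (a 25 MHz R3000) retires on the order of 20 M instructions
per second, so 1 MLIPS on that machine works out to roughly 20 native
instructions per inference on this benchmark -- comfortably inside the
100-instruction bound.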

Depending on the implementation model used -- continuation passing,
environment or goal stacking, structure sharing, etc. -- various small
hardware enhancements -- such as the tag logic outlined in a previous
posting -- would significantly improve performance.

In our work here at SUNY-B we can simple-mindedly transform Prolog
into C (or Ada -- the other popular favorite :-() that subsequently
runs on said workstations at 250 KLIPS. Some may consider this
``cheating'' -- but it's reasonable to say that the dataflow and
register allocation schemes used in the relevant C compiler are now
simply part of a Prolog compiler that uses it as a second (and
subsequent) pass. Saves quite a bit of code cutting, too. :-)
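
To give a flavor of what the transformation can produce -- this is an
illustration of the idea, not our actual generated code -- once mode
and determinism analysis establish that nrev/2 and app/3 are called
with ground first arguments, the clauses collapse into plain recursive
C, with no choice points and no run-time unification:

    #include <stdlib.h>

    typedef struct Cell { int head; struct Cell *tail; } Cell;

    static Cell *cons(int h, Cell *t)   /* heap cell a real system would GC */
    {
        Cell *c = malloc(sizeof *c);
        c->head = h; c->tail = t;
        return c;
    }

    /* app([],Ys,Ys).  app([X|Xs],Ys,[X|Zs]) :- app(Xs,Ys,Zs). */
    static Cell *app(Cell *xs, Cell *ys)
    {
        if (xs == NULL) return ys;                 /* first clause  */
        return cons(xs->head, app(xs->tail, ys));  /* second clause */
    }

    /* nrev([],[]).  nrev([X|Xs],R) :- nrev(Xs,T), app(T,[X],R). */
    static Cell *nrev(Cell *xs)
    {
        if (xs == NULL) return NULL;
        return app(nrev(xs->tail), cons(xs->head, NULL));
    }

First-argument indexing becomes an ordinary NULL test, and the C
compiler's dataflow and register allocation then do work a Prolog back
end would otherwise duplicate.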

Another idea that's been toyed with here is an efficient means of
implementing closures, including various aspects of last-call
optimization, in RISCy hardware. This is obviously applicable to
continuation passing. While this may not maximize Prolog performance
specifically, it could also be used in implementing functional
programming languages.

-kym
From: Mark Johnson
Subject: Re: Prolog machines? WAS: Re: Ideas for new LISP Machines
Date: 
Message-ID: <1992Apr4.223203.18427@cs.brown.edu>
In article <·····················@skynet.uucp> ···@skynet.uucp (Ice) writes:
> ... Prolog is very slow, but ...

In my experience this is not so.  On the Motorola 88k with the Chez
Scheme and ALS Prolog systems that I use (both of which compile rather
than emulate, I believe), it's hard to predict a priori which will be
faster.  I'm not really knowledgeable about the implementation details
of either of these systems, but here is what I believe is going on.
(Comments and corrections from the more knowledgeable would be
appreciated!)

Generally it seems that programs that fit more or less directly
into Prolog's top-down depth-first search run as well as or better
than in Scheme, perhaps because variable bindings and backtracking
choice points are manipulated very efficiently and stored on a stack,
so memory can be reclaimed immediately on backtracking.

Comparable programs in Scheme often have to be written in continuation-
passing style, so the alternative choices wind up being stored on
the heap rather than the stack.  So Scheme programs typically turn
over much more heap memory, which must be reclaimed by the GC.

On the other hand, implementing certain specialized algorithms in
Prolog can often be difficult or impossible.  For example, the
congruence closure algorithm relies on destructive "pointer
redirection" -- which Prolog does not support -- to get its
pseudo-linear performance.
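
For concreteness, the pointer redirection in question is the path
compression step of union-find; a minimal C sketch (my illustration,
not any particular system's code):

    enum { N = 1024 };
    static int parent[N];   /* each node points toward its representative */

    static void init(void)
    {
        int i;
        for (i = 0; i < N; i++) parent[i] = i;
    }

    static int find(int x)
    {
        while (parent[x] != x) {
            parent[x] = parent[parent[x]];  /* destructively halve the path */
            x = parent[x];
        }
        return x;
    }

    static void unite(int a, int b) { parent[find(a)] = find(b); }

It is exactly the in-place overwrite of parent[] that yields the
pseudo-linear bound, and exactly what a pure Prolog term, once bound,
cannot do.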

Mark Johnson
From: Bob Kerns
Subject: Re: Ideas for new LISP Machines
Date: 
Message-ID: <RWK.92Apr4100025@taunton.crl.dec.com>
In article <··················@pecos.ads.com> ·····@pecos.ads.com (Clinton Hyde) writes:

   From: ·····@pecos.ads.com (Clinton Hyde)
   Date: 2 Apr 92 02:12:14 GMT
   the technical points are good, and on-target. the market competition
   was a $10k Sun3, and the LispMs couldn't compete on dollars, and the
   folks who make capital purchase decisions (who didn't care about the
   lisp part) only saw $90k cost difference. it's damn tough to overcome
   that. 

I don't think that $10k Suns and $100k Lispms ever coexisted.
When Sun was selling $10k diskless nodes with just enough memory
to boot SunOS, Symbolics' machines were around $30k for a much
larger configuration.

By the time you configured a Sun suitably for running Lisp,
a lot of that difference had evaporated.  (Start with buying
a large disk!)  Add in the cost of the Lisp software, too.
Sun did a good job of convincing people that the Lispm would
cost them $100k, though.

I think a much bigger issue than the $90k or $5k or -$2k
(or whatever the real number was) is that once you spent it
on a Symbolics machine, you still had to run out and buy
a Sun to run all your other software.  Unless you were
willing to commit to Symbolics and Lisp for ALL your compute
needs, you still had to include that ~$20k for another computer
(a Sun configured for useful non-Lisp work).
From: Robert Goldman
Subject: Re: Ideas for new LISP Machines [pricing]
Date: 
Message-ID: <RPG.92Apr4130144@clones.cs.tulane.edu>
On the pricing issue: this arose while I was in graduate school,
and I believe that for the academic market there were some other
points worthy of discussion here:

1.  As in the commercial world, the people who buy the machines
are not those who use them.  This problem may not be as bad in the
academic world, but it still exists.

2.  You have to buy Symbolics + Unix boxes if you buy Symbolics, as
opposed to just buying Unix boxes.

3.  Sun gave a much bigger academic discount.

4.  Symbolics maintenance was cripplingly expensive, especially as
the machines aged.  I'm surprised this issue hasn't been mentioned
before.  When I was in grad school, the only disked-up machine we
had was a 3600, which was HORRIBLY expensive to maintain.  We might
not have scrapped the Symbolics machines if we had been able to use
our Unix boxes as disk servers for them, but that seemed nigh
impossible (read "expensive" for "impossible").  And buying disk
drives for all that software, to upgrade one of our newer machines
so we could let the 3600 die, was also too expensive.  I maintained
the Symbolics machines myself, on a wing and a prayer, while I
finished my thesis.  I imagine they are in the elephants' graveyard
by now.

R
From: Marty Hall
Subject: Re: Ideas for new LISP Machines [pricing]
Date: 
Message-ID: <1992Apr6.140702.22137@aplcen.apl.jhu.edu>
···@cs.tulane.edu (Robert Goldman) writes:
[Re Symbolics pricing]

>2.  you have to buy symbolics + unix boxes, if you buy symbolics as
>opposed to just buying unix boxes.

Very true.  We still do.  We use our Symbolics machines for LISP and
the Suns for everything else (FrameMaker, C, etc.), so a Symbolics
is clearly not useful unless you do a *lot* of LISP.

>3.  Sun gave a much bigger academic discount

No longer true, however.  We have purchased both Symbolics machines
and Suns this year for the program I work on at the Johns Hopkins
Applied Physics Lab, and the Symbolics discounts have been
much better.  They give 45% off all hardware and hardware
maintenance, and usually charge a flat rate of $500 for
any of their unbundled software, no matter what the original
price.

This pricing policy is only a couple of years old.

					- Marty
------------------------------------------------------
····@aplcen.apl.jhu.edu, ···········@jhunix.bitnet, ..uunet!aplcen!hall
Artificial Intelligence Lab, AAI Corp, PO Box 126, Hunt Valley, MD 21030

(setf (need-p 'disclaimer) NIL)
From: David V. Wallace
Subject: Ideas for new LISP Machines
Date: 
Message-ID: <GUMBY.92Apr5063926@Cygnus.COM>
I wish you could still execute out of data space.  This makes it
easier to build callable closures (and to do downward funargs by
indirecting through the stack).  GNU C has to stand on its head in
order to make this work.  Fortunately register windows are on their
way out; _those_ can make lexical variables in downward funargs expensive.
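
To show what I mean by GNU C standing on its head, here is its
nested-function extension (a gcc extension, not ISO C): taking the
address of the nested function forces gcc to write a small trampoline
into the current stack frame -- i.e., to execute out of data space --
which breaks the moment stacks are made non-executable:

    #include <stdio.h>

    static void map_array(int *a, int n, void (*f)(int *))
    {
        int i;
        for (i = 0; i < n; i++) f(&a[i]);
    }

    int main(void)
    {
        int delta = 5;                      /* free variable of the closure */
        void bump(int *p) { *p += delta; }  /* GNU C nested function        */
        int a[] = {1, 2, 3};

        map_array(a, 3, bump);              /* downward funarg via trampoline */
        printf("%d %d %d\n", a[0], a[1], a[2]);   /* prints 6 7 8 */
        return 0;
    }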

It would also be nice to be able to cheaply intercept page misses in
user space.
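
The standard approximation is the trick Appel has written up: mprotect()
a page and field the fault in user space.  In today's POSIX terms the
sketch below shows the shape of it (the vendor interfaces of the day
were messier and system-specific); the catch is the word "cheaply",
since a signal dispatch costs thousands of cycles where a LISP machine
trapped in microcode:

    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static long pagesize;

    /* A GC would record the faulting page in a dirty set (or scan and
       forward the objects on it) before reopening it and resuming. */
    static void on_fault(int sig, siginfo_t *si, void *ctx)
    {
        void *pg = (void *)((uintptr_t)si->si_addr & ~(uintptr_t)(pagesize - 1));
        (void)sig; (void)ctx;
        mprotect(pg, pagesize, PROT_READ | PROT_WRITE);
    }

    int main(void)
    {
        struct sigaction sa;
        char *page;

        pagesize = sysconf(_SC_PAGESIZE);
        page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        sa.sa_sigaction = on_fault;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        mprotect(page, pagesize, PROT_NONE);  /* arm the user-level trap      */
        page[0] = 42;                         /* faults; handler reopens page */
        printf("resumed after user-level fault: %d\n", page[0]);
        return 0;
    }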

I've talked about these to C hackers who always reply "but I've never
heard of anyone doing that."

A lot of nice features of the old machines turned out to be losers.
For instance, invisible pointers sounded like a good idea, but can
slow down every access.  And, as someone else pointed out, there are
alternatives (necessity is the mother of invention, I guess).