From: ···@sef-pmax.slisp.cs.cmu.edu
Subject: Re: Lisp Machine architecture and other hardware support for Lisp
Date: 
Message-ID: <CJMJnv.82q.3@cs.cmu.edu>
    From: ········@serd.cscs.ch (Bryan O'Sullivan)
    
    I am looking for information on the hardware and software architecture
    of the Symbolics Lisp Machine...

The January 1987 issue of IEEE Computer (special issue on computers for AI
applications) has a good overview article on the Symbolics architecture by
Dave Moon.
    
    Also, I understand that the SPARC was originally designed with
    efficient execution of Lisp code in mind.  Does anyone have any
    details, or perhaps know where I can find them (all I am fuzzily aware
    of is some hardware support for tags)?
    
Well, any RISC machine with a big address space is pretty good for Lisp,
but I think the tag instructions are the only thing special on the Sparc.
But they aren't quite what we wanted for CMU CL, and I don't think we use
them (unless my elves have done something clever while I wasn't looking).
I don't know about the other major Lisps on the Sparc.

-- Scott

===========================================================================
Scott E. Fahlman			Internet:  ····@cs.cmu.edu
Senior Research Scientist		Phone:     412 268-2575
School of Computer Science              Fax:       412 681-5739
Carnegie Mellon University		Latitude:  40:26:33 N
5000 Forbes Avenue			Longitude: 79:56:48 W
Pittsburgh, PA 15213
===========================================================================

From: Simon Leinen
Subject: Re: Lisp Machine architecture and other hardware support for Lisp
Date: 
Message-ID: <SIMON.94Jan14164536@liasg3.epfl.ch>
sef> Well, any RISC machine with a big address space is pretty good
sef> for Lisp, but I think the tag instructions are the only thing
sef> special on the Sparc.  But they aren't quite what we wanted for
sef> CMU CL, and I don't think we use them (unless my elves have done
sef> something clever while I wasn't looking).

Yes you do! Look at the "TADDCCTV" ("tagged add and modify icc and
trap on overflow") below:

% cmucl
; Loading #p"/export/babar/users/simon/.cmucl-init.lisp".
CMU Common Lisp 2, running on liasun6
Send bug reports and questions to ··········@cs.cmu.edu.
Loaded subsystems:
    Python 1.0, target SPARCstation/Sun 4
    CLX X Library MIT R5.01
    CLOS based on PCL version:  September 16 92 PCL (f)
* (defun foo (x) (declare (type fixnum x)) (the fixnum (+ x x)))

FOO
* (disassemble 'foo)
Compiling LAMBDA (X): 
Compiling Top-Level Form: 

10014320:       .ENTRY "LAMBDA (X)"(x)       ; (FUNCTION (FIXNUM) FIXNUM)
      38:       ADD   -18, %CODE
      3C:       ADD   %CFP, 32, %CSP

      40:       CMP   %NARGS, 4              ; %NARGS = #:G1
      44:       BNE   L0
      48:       NOP
      4C:       TADDCCTV %ZERO, %A0          ; %A0 = #:G2
      50:       MOVE  %A0, %NFP
      54:       TADDCCTV %NFP, %NFP, %A0     ; No-arg-parsing entry point
      58:       MOVE  %CFP, %CSP
      5C:       MOVE  %OCFP, %CFP
      60:       J     %LRA+5
      64:       MOVE  %LRA, %CODE
      68: L0:   UNIMP 10                     ; Error trap
      6C:       BYTE  #x04
      6D:       BYTE  #x19                   ; INVALID-ARGUMENT-COUNT-ERROR
      6E:       BYTE  #xFE, #xEB, #x01       ; NARGS
      71:       .ALIGN 4
* 

sef> I don't know about the other major Lisps on the Sparc.

The current versions of Allegro and Lucid use the tagged instructions,
too.  Don't know about the others.
-- 
Simon.
From: Rob MacLachlan
Subject: Re: Lisp Machine architecture and other hardware support for Lisp
Date: 
Message-ID: <CJMs5q.Crw.3@cs.cmu.edu>
In article <············@cs.cmu.edu>,  <···@sef-pmax.slisp.cs.cmu.edu> wrote:
>    
>Well, any RISC machine with a big address space is pretty good for Lisp,
>but I think the tag instructions are the only thing special on the Sparc.
>But they aren't quite what we wanted for CMU CL, and I don't think we use
>them (unless my elves have done something clever while I wasn't looking).
>I don't know about the other major Lisps on the Sparc.
>

Actually, CMU CL does use tagged add on the SPARC, although in the code I've
studied, it ends up being used most often purely to do type checking
by adding a constant 0.  The SPARC conditional trap instructions turn out
to be more generally useful.  In a MIPS/SPARC comparison I did, the SPARC
gained from these type-checking features, but the MIPS cancelled out
this gain by having a branch-eq instruction.

The main SPARC feature we don't use is register windows.  Conceptually,
register windows are probably a good thing for a call-intensive,
dynamically-linked language, but managing the register window pointer
is privileged, and (at least at the time we looked at it) supporting
THROW would have had a large complexity and efficiency penalty.

  Rob
From: Paul Wilson
Subject: CPU & Memory system requirements for Lisp (was Re: Lisp Machine arch.)
Date: 
Message-ID: <2hkppv$d74@boogie.cs.utexas.edu>
I think the general consensus is that most of the Lisp-specific features
of Lisp machines were either not a huge win or not a win at all, if
your goal is to make a single Lisp application run fast.  (It's a bit
different if your goal is to make a Lisp system into the operating
system, as on Lisp Machines.  In that case, you're a little more
worried about heap integrity in the face of bugs, etc.---e.g., a mangled
pointer can bring the whole system down by confusing the garbage collector.)

On the other hand, Lisp and other garbage-collected systems have peculiar
memory referencing behavior, and some memory hierarchy configurations
will work a lot better than others.  As processors get faster and faster,
cache misses can become a problem if you don't have the right kind of
cache.  (More on that in a second...)

David Ungar's Ph.D. thesis (about the SOAR RISC machine for Smalltalk, 
which influenced the SPARC) indicated that most hardware support for Lisp
or Smalltalk would have only small effects on performance, with perhaps
significant effects on cycle time, design time and time-to-market, making
them not worthwhile.  That's a controversial conclusion, because it's
unclear that most of those features really impact cycle time if done
right, or take up too much design time or chip area now that all major
CPU's are large design projects anyhow and chip densities are very
high.  Nobody designs a classic 70K transistor RISC machine any more
anyway, and it's unclear that Lisp support is any harder to do than a
lot of other things people do anyway.

One thing that Ungar dropped from Lisp machines when designing his
Smalltalk machine was the ``read barrier'' required for incremental
copying garbage collection.  In my view, this was a good choice because
it's most likely to interact poorly with the pipeline of a modern
CPU, and it still doesn't guarantee usefully real-time performance.
(Kelvin Nilsen has shown how to get real real-time performance for
copying collection with a custom memory subsystem, rather than by
interfering with the CPU design.  My own approach is to use non-copying
GC for real-time purposes, which means I can run on stock hardware
but have to deal with fragmentation problems.)

Of the few features that Ungar decided were worthwhile for supporting
languages like Lisp and Smalltalk, none are generally agreed to be
worthwhile now.

Register windows have become much less attractive now that better calling
disciplines and register allocating compilers are available.  They may
still be a win under some circumstances, and the large register file
may not really slow your cycle time significantly or take up a significant
percentage of a modern-sized chip, but compiler developers hate them.

Tagged arithmetic instructions are useful, but are less critical when you
have good type declarations (as in most real Common Lisp programs) or good
optimizing compilers (as Ungar himself has developed with Chambers and 
Hoelzle).

"Write barrier" support for generational garbage collection turns out
not to be a great idea, because there are new ways of implementing write
barriers entirely in software that appear to have comparable usual-case
performance and much better worst-case performance.

One thing that may be a good idea is to have either small pages or
small sub-page units of protection (as the ARM 600 does), and make the
OS dirty bits readable by the garbage collector, and make protection-setting
and trap handling as fast as possible.  Those features allow you to implement
all kinds of nifty features for garbage collection and memory hierarchy
tweaking.


But back to basic memory-referencing behavior.  A couple of years ago,
I published a paper showing that garbage-collected systems tend to have
lousy temporal locality of reference at timescales shorter than the
usual garbage collection cycle.  Generational garbage collection can
help a lot, at the level of virtual memory, by keeping the normal
memory reuse cycle much shorter, i.e., reusing memory while it's still
in RAM.  Similar effects happen at the level of cache, and they're
getting to be increasingly important as CPU's get faster.  Caches
smaller than the youngest generation suffer cache miss traffic at
least equal to the rate of allocation, and a similar amount of write-back
traffic.  The write-backs require somewhat deeper write buffers
than normal programs, or some equivalent architectural feature that
makes it possible to write lots of stuff from the cache without stalling.
The reads due to allocation could be optimized away with a funky cache
design that lets you allocate space in cache without reading the
old contents of the blocks.

Recently, Diwan, Tarditi, and Moss of U. Mass. and CMU have shown that
some existing memory systems already do the right optimizations,
by supporting high write rates and by doing sub-block placement where
the sub-blocks are word-sized.  (Sub-block placement allows initializing
writes to write a whole sub-block, leaving the other sub-blocks invalid
rather than stalling the CPU and waiting for the rest of the block
to be loaded.  This gives the desired effect of allowing allocation
to proceed without stalling the processor for faulted-on blocks
that are only going to be overwritten anyhow.)


Papers about much of this stuff are available via anonymous ftp from
cs.utexas.edu, in the directory pub/garbage.
It's a repository of papers about GC, memory hierarchies, and related
subjects.  There's an extensive bibliography file in .bib format, which
refers to Ungar and others' work as well as Lisp Machine stuff.

Some papers relevant to this posting:

  pub/garbage/gcsurvey.ps has my survey paper about garbage collection

  pub/garbage/cache.ps is my paper about cache effects of GC

  pub/garbage/GC93/hoelzle.ps talks about a very fast software write
  barrier.

  pub/garbage/GC93/nilsen.ps talks about the difficulties of doing
  hard real-time copying collection.

  pub/garbage/GC93/wilson.ps talks about real-time non-copying collection
  for stock hardware.

Have a look at pub/garbage/README for more directions.

Enjoy,

   Paul


-- 
| Paul R. Wilson,   Computer Sciences Dept.,   University of Texas at Austin  |
| Taylor Hall 2.124,  Austin, TX 78712-1188       ······@cs.utexas.edu        |
| (Recent papers on garbage collection, memory hierarchies, and persistence   |
| are available via anonymous ftp from cs.utexas.edu, in pub/garbage.)        |
From: John McClain
Subject: Re: CPU & Memory system requirements for Lisp (was Re: Lisp Machine arch.)
Date: 
Message-ID: <PDP8.94Jan21134039@teenage-mutant.ai.mit.edu>
[Note: My master's thesis is on a Lisp-friendly architecture, so my take on
all this may be a little different.]

In article <··········@boogie.cs.utexas.edu> ······@cs.utexas.edu (Paul Wilson) writes:

>   Register windows have become much less attractive now that better calling
>   disciplines and register allocating compilers are available.  They may
>   still be a win under some circumstances, and the large register file
>   may not really slow your cycle time significantly or take up a significant
>   percentage of a modern-sized chip, but compiler developers hate them.

Are register windows really that bad an idea, or is it just the SPARC
implementation of the idea?  The two places they seem to lose are the
fixed size, and the fact that you need to take a kernel trap to
spill/unspill them.  Does anyone have any data on how register windows
do without these limitations?  Is there something I am missing?  Why
do compiler developers hate them?

>   Tagged arithmetic instructions are useful, but are less critical when you
>   have good type declarations (as in most real Common Lisp programs)...

This bugs me a bit since one of the big reasons I like Lisp is because
I don't need to do type declarations all over the place.

>   ...or good optimizing compilers (as Ungar himself has developed with 
>   Chambers and Hoelzle).

Compilers have come a long way but:

   Olin Shivers, in his chapter on data flow and type recovery in
   Scheme from _Topics in Advanced Language Implementation_, suggests
   that bignums still cause problems, as range analysis is still not
   very good (does anyone have any data on how good range analysis is?)

   Robert MacLachlan suggests in his "Lisp vs. RISC" paper that some
   poorly declared Lisp programs take large speed hits from generic
   arithmetic.  His data is from CMU's Python compiler, which is
   supposed to be very good at type inference.
 

>  "Write barrier" support for generational garbage collection turns out
>   not to be a great idea, because there are new ways of implementing write
>   barriers entirely in software that appear to have comparable usual-case
>   performance and much better worst-case performance.

I would be interested is seeing the references for these...is the
Hoelzle paper?


John W.F. McClain
From: Urs Hoelzle
Subject: Re: CPU & Memory system requirements for Lisp (was Re: Lisp Machine arch.)
Date: 
Message-ID: <hoelzle.759267037@Xenon.Stanford.EDU>
····@ai.mit.edu (John McClain) writes:

>In article <··········@boogie.cs.utexas.edu> ······@cs.utexas.edu (Paul Wilson) writes:

>>   Register windows have become much less attractive now that better calling
>>   disciplines and register allocating compilers are available.  They may
>>   still be a win under some circumstances, and the large register file
>>   may not really slow your cycle time significantly or take up a significant
>>   percentage of a modern-sized chip, but compiler developers hate them.

>Are register windows really that bad an idea, or is it just the SPARC
>implementation of the idea?  The two places they seem to lose is the
>fixed size, and the fact you need to take a kernel trap to
>spill/unspill them?  Does anyone have any data on how register windows
>do without these limitations?  Is there something I am missing?  Why
>do compiler developers hate them?

I love them; if your compiler needs to be fast, they're a big help
because you can get good results with a simple and fast allocator,
since you don't have to worry about spills & reloads around calls.

The current (SPARC V8) trap model is brain-dead, however, since the
window overflow/underflow traps are kernel-mode.  The SPARC V9
definition fixes that, and as a result the trap overhead should go
from something like 200 cycles to about 20-40 cycles per trap.  With
such fast traps, windows are probably a win.  

[I did a study for our Self system (which is more call-intensive than
SPEC int benchmarks and has more overflows/underflows).  I assumed
that only 2.5 values need to be stored (and reloaded) per call,
including SP & PC, using a state-of-the-art allocator without register
windows.  (I think that's pretty conservative.)  With SPARC V9 traps,
register windows saved about 5-7% of execution time for the programs
measured, compared to a system using explicit loads & stores.  With
current traps, it's about even.]

-Urs
From: Paul Wilson
Subject: Re: CPU & Memory system requirements for Lisp (was Re: Lisp Machine arch.)
Date: 
Message-ID: <2hv49c$8oe@jive.cs.utexas.edu>
I should probably have been clearer in my earlier posting that I wasn't
really saying that register windows and tagged arithmetic were not
a win for Lisp---just that they're not as big a win as was once believed.

In article <··················@teenage-mutant.ai.mit.edu>,
John McClain <····@ai.mit.edu> wrote:
>
>In article <··········@boogie.cs.utexas.edu> ······@cs.utexas.edu (Paul Wilson) writes:
>
>>   Register windows have become much less attractive now that better calling
>>   disciplines and register allocating compilers are available.  They may
>>   still be a win under some circumstances, and the large register file
>>   may not really slow your cycle time significantly or take up a significant
>>   percentage of a modern-sized chip, but compiler developers hate them.
>
>Are register windows really that bad an idea, or is it just the SPARC
>implementation of the idea?  The two places they seem to lose is the
>fixed size, and the fact you need to take a kernel trap to
>spill/unspill them?  Does anyone have any data on how register windows
>do without these limitations?  Is there something I am missing?  Why
>do compiler developers hate them?

I'm not clear on the details of any of the non-SPARC register windowing
schemes, so to a first approximation I was indeed referring to SPARC
(< v.9) windows.  They complicate your life if you're trying to implement
things like threads or call-with-current-continuation, or anything that
uses access-protection tricks on the stack segment (e.g., guard page to
trigger GC flips, pagewise incremental stack scanning for incremental GC).
Even if in principle they can be nice, they add complications that make
it more likely you'll have bugs---including OS kernel bugs where virtual
memory features don't work as advertised on the stack segment :-(.

One complication that I suspect happens in most windowing schemes
is that you have to implement normal register allocation anyway, because
windows are of bounded size.  So you have to deal with the complications
due to flushing and restoring register windows, plus the complications of
general register allocation.  (The register windows can hide insidious
bugs in your register allocator, too---the bugs only show up in a few
places rather than all the time.)

>>   Tagged arithmetic instructions are useful, but are less critical when you
>>   have good type declarations (as in most real Common Lisp programs)...
>
>This bugs me a bit since one of the big reasons I like Lisp is because
>I don't need to do type declarations all over the place.

Good point.  On the other hand, my understanding is that tagged instructions
only give you a fraction of the benefits of type declaration or type
inference.  Actually knowing the types makes it much easier to generate
fast, compact code---without lots of branches that inhibit conventional
optimizations such as common subexpression elimination, constant folding,
pipelining, etc.

>Compilers have come a long way but:
>
>   Olin Shivers, in his chapter on data flow and type recovery in
>   Scheme from _Topics in Advanced Language Implementation_, suggests
>   that bignums still cause problems, as range analysis is still not
>   very good (does anyone have any data on how good range analysis is?)

This suggests to me that bignums are not a great idea in a general-purpose
programming language.  I think that for most programs, it would be better
if the language (perhaps based on a compiler option) ensured that fixnums
would not silently overflow into bignums and radically change the
performance characteristics of your code.  (This is not to say that
bignums aren't great for certain classes of programs.  Just that most
of the programs I write aren't ever intended to use bignums.  I'd rather
have my overflows trap and get an error message.)

>   Robert MacLachlan suggests in his "Lisp vs. RISC" paper that some
>   poorly declared Lisp programs take large speed hits from generic
>   arithmetic.  His data is from CMU's Python compiler, which is
>   supposed to be very good at type inference.

How much of this performance problem can be solved with simple hardware,
and how much will require better declarations and/or inference?  Good
profiling tools could also be more of a win than specialized hardware---e.g.,
pointing out the handful of places in your code where a fixnum declaration
will have a big impact on performance.  (It's not always as simple as it
sounds, which is why *good* profiling tools are important.)

>>  "Write barrier" support for generational garbage collection turns out
>>   not to be a great idea, because there are new ways of implementing write
>>   barriers entirely in software that appear to have comparable usual-case
>>   performance and much better worst-case performance.
>
>I would be interested is seeing the references for these...is the
>Hoelzle paper?

Applying a little syntactic error recovery, yes, I think so...
Urs' paper from the OOPSLA '93 GC workshop is the latest and greatest
in this vein.  But you should look at my OOPSLA '89 paper "Design of
the Opportunistic Garbage Collector" for the basic "card marking" write
barrier scheme and the motivations behind it.  (Urs' collector is a
maximally tweaked version of that, which is in turn a tweaked software
implementation of a variant of Moon's write barrier... actually,
you could call it the Moon-Sobalvarro-Wilson-Ungar-Chambers-Hoelzle
write barrier.  You could even throw in Appel-Ellis-Li because I stole a
little trick from them too. :-)
-- 
| Paul R. Wilson,   Computer Sciences Dept.,   University of Texas at Austin  |
| Taylor Hall 2.124,  Austin, TX 78712-1188       ······@cs.utexas.edu        |
| (Recent papers on garbage collection, memory hierarchies, and persistence   |
| are available via anonymous ftp from cs.utexas.edu, in pub/garbage.)        |
From: Rick Hudson
Subject: Re: CPU & Memory system requirements for Lisp (was Re: Lisp Machine arch.)
Date: 
Message-ID: <HUDSON.94Jan24104845@yough.ucc.umass.edu>
>>>  "Write barrier" support for generational garbage collection turns out
>>>   not to be a great idea, because there are new ways of implementing write
>>>   barriers entirely in software that appear to have comparable usual-case
>>>   performance and much better worst-case performance.
>>
>>I would be interested is seeing the references for these...is the
>>Hoelzle paper?

> Applying a little syntactic error recovery, yes, I think so...
> Urs' paper from the OOPSLA '93 GC workshop is the latest and greatest
> in this vein.  But you should look at my OOPSLA '89 paper "Design of
> the Opportunistic Garbage Collector" for the basic "card marking" write
> barrier scheme and the motivations behind it.  (Urs' collector is a
> maximally tweaked version of that, which is in turn a tweaked software
> implementation of a variant of Moon's write barrier... actually,
> you could call it the Moon-Sobalvarro-Wilson-Ungar-Chambers-Hoelzle
> write barrier.  You could even throw in Appel-Ellis-Li because I stole a
> little trick from them too. :-)

At the same GC workshop Tony Hosking and I suggested that a hybrid scheme that
used card marking as well as remembered sets might actually be the best
solution.  See ibis.cs.umass.edu /pub/papers/... for several of our recent
papers on write barrier performance, GC, and persistence in general, including
the Diwan/Tarditi/Moss papers on the total cost of memory management.
--

       Richard L. Hudson  -  ······@cs.umass.edu; (413) 545-1220; 
       Advanced Languages Project - University Computing Services
       Lederle Graduate Research Center
       University of Massachusetts             Amherst, MA  01003
From: Paul Wilson
Subject: Re: CPU & Memory system requirements for Lisp (was Re: Lisp Machine arch.)
Date: 
Message-ID: <2i1bn6$9vh@jive.cs.utexas.edu>
In article <····················@yough.ucc.umass.edu>,
Rick Hudson <······@cs.umass.edu> wrote:
>>>I would be interested is seeing the references for these...is the
>>>Hoelzle paper?
>
>> Applying a little syntactic error recovery, yes, I think so...
>> Urs' paper from the OOPSLA '93 GC workshop is the latest and greatest
>> in this vein.  But you should look at my OOPSLA '89 paper "Design of
>> the Opportunistic Garbage Collector" for the basic "card marking" write
>> barrier scheme and the motivations behind it.  (Urs' collector is a
>> maximally tweaked version of that, which is in turn a tweaked software
>> implementation of a variant of Moon's write barrier... actually,
>> you could call it the Moon-Sobalvarro-Wilson-Ungar-Chambers-Hoelzle
>> write barrier.  You could even throw in Appel-Ellis-Li because I stole a
>> little trick from them too. :-)
>
>At the same GC workshop Tony Hosking and I suggested that a hybrid scheme that
>used card marking as well as remembered sets might actually be the best
>solution.

Good point.  This feature can be combined with Urs' write barrier, making it
a Moon-Sobalvarro-Wilson-Appel-Ellis-Li-Ungar-Chambers-Hoelzle-Hudson-Hosking
scheme.


(The idea of card marking is that you keep dirty bits in software saying
which areas of memory have had a pointer stored into them lately, and
you scan those at GC time to find pointers into younger generations.  This
allows you to garbage collect younger objects without actually traversing
all of the data in older generations.  The advantage over some earlier
schemes is that the cost per pointer store is fixed, and the areas you
have to scan are small---unlike recording which objects were stored into,
which would cause you to scan large objects sometimes.  (That was Ungar's
original stock hardware scheme.)  Urs' write barrier is a variant with
faster marking.  The Hosking-Hudson version avoids a different problem:
at the end of a GC, you may still have pointers from older data into
younger ones, and you have to remember where they are.  My scheme
just remembers which cards hold them, and has to scan those cards again
at the next GC.  Hosking and Hudson optimize this so that the whole card
doesn't have to be scanned again if it's not stored into again---they
remember which individual words have pointers into younger generations,
and only have to look at those words at the next GC.)

This paper is also available in our repository (cs.utexas.edu:
pub/garbage/GC93/hosking.ps).

>  See ibis.cs.umass.edu /pub/papers/... for several of our recent
>papers on write barrier performance, GC, and persistence in general, including
>the Diwan/Tarditi/Moss papers on the total cost of memory management.



-- 
| Paul R. Wilson,   Computer Sciences Dept.,   University of Texas at Austin  |
| Taylor Hall 2.124,  Austin, TX 78712-1188       ······@cs.utexas.edu        |
| (Recent papers on garbage collection, memory hierarchies, and persistence   |
| are available via anonymous ftp from cs.utexas.edu, in pub/garbage.)        |
From: Ted Dunning
Subject: Re: Lisp Machine architecture and other hardware support for Lisp
Date: 
Message-ID: <TED.94Jan14190435@lole.crl.nmsu.edu>
In article <············@cs.cmu.edu> ···@sef-pmax.slisp.cs.cmu.edu writes:


   Well, any RISC machine with a big address space is pretty good for Lisp,
   but I think the tag instructions are the only thing special on the Sparc.

they claimed that the alignment constraints could be used to open code
car and cdr and structure accessors in one instruction which faulted
on illegal type.
From: Guy Harris
Subject: Re: Lisp Machine architecture and other hardware support for Lisp
Date: 
Message-ID: <19605@auspex-gw.auspex.com>
>   Well, any RISC machine with a big address space is pretty good for Lisp,
>   but I think the tag instructions are the only thing special on the Sparc.
>
>they claimed that the alignment constraints could be used to open code
>car and cdr and structure accessors in one instruction which faulted
>on illegal type.

Who are "they", and are they claiming that this makes SPARC special
(unlikely, as most other RISC architectures have similar alignment
constraints), or that this makes SPARC *not* special (e.g., claiming that
the tagged instructions weren't really needed), or neither?