From: Kelly Murray
Subject: The New Lisp Machine
Date: 
Message-ID: <1rrkhdINN1h8@no-names.nerdc.ufl.edu>
I have been thinking about building a New Lisp Machine.
I would like to see what kind of support this project might get
before I start putting any more thought or time into the project.

The Symbolics 3600 with Release 5.3 was a great machine.
With less than 5 MIPS of processor,
2 Mbytes of memory, a 100-Mbyte disk, and a monochrome 1024x1024 display,
it made an incredibly productive workstation, and was a joy to use.

You can get this level of hardware today for close to $500.
I am considering a project to develop a free software system
that can transform this $500 hardware into the equivalent of a Lisp Machine.
A 386 PC is the hardware.
A large amount of existing free software already does much of the job.
BSD Unix, CMU's Common Lisp, GNU Emacs, X windows.

Am I the only one who wants to see this happen? 
Is anybody willing to volunteer to work on such a project?  
Are there people out there who would use such a system?

Your feedback appreciated,

 Kelly Murray (···@prl.ufl.edu)

From: MAEGAWA Hirotoshi
Subject: Re: The New Lisp Machine
Date: 
Message-ID: <MAEGAWA.93May14115537@apus.pdp.crl.sony.co.jp>
In article <·················@arolla.idiap.ch> ···@arolla.idiap.ch (Thomas M. Breuel) writes:
> >>>>> On 11 May 93 12:21:00 GMT, ·······@pdp.crl.sony.co.jp (MAEGAWA Hirotoshi) said:
> >   The key issue is garbage collection.  When you solve fairly heavy
> > problems, that is, taking a couple of hours or more to execute, the
> > garbage collection will take 200% of the real data manipulation time.
> 
> You must be using a pretty poor garbage collector, or you must have
> started paging.  There is also no reason why GC overhead should be
> larger for long-running programs than for short-running ones (assuming
> a reasonably modern GC).

  We need to consider resource-handling time as well as CPU time.
Even when data manipulation, including garbage collection, is quite
fast, a system that spends much of its time fumbling with disks and
the paging mechanism is still a poor one.  Also, turning off paging or
the garbage collector, though we sometimes do that for measurement and
evaluation, cheats the total system performance.

  The problem is caused not only by memory reclamation but also by
data swapping across a hierarchy of memory devices, which is a quite
expensive process.  Since block transfer in paged systems is handled
based on physical memory locations, the transfer includes unnecessary
data and garbage.
  The reason we copy data between heaps is to improve the locality of
reference.  However, this is not an intrinsic solution, and the data
is still scattered over the heap area before the copy.
  A dynamic memory allocation scheme using ephemeral garbage
collection helps the system solve relatively small problems.  However,
when we bring it fairly large problems, such as semiconductor process
simulation and other sophisticated AI problems, that scheme decreases
system performance as it reallocates the memory spaces.

  Our scheme, MOLDS, manages data between main and secondary memories
based on the data's structural characteristics.  It does not transfer
garbage to the secondary memory device.  MOLDS manages the secondary
memory space by means of data structures which have locality of
reference.  The garbage collector operates on those structures in
secondary memory, so it does not have to go through the actual data on
the secondary device.  In MOLDS, no complicated garbage collector is
needed.
  Incidentally, using the secondary memory data structures, the system
can quite easily take snapshots of memory and of intermediate
situations, like the worlds and bands used on the classic Lisp
machines.
  See my paper in the proceedings of ACM's 1991 annual computer
science conference for the details.


> > I would make a co-processor capable of fast type checking and
> > function/method dispatching.
> 
> It's not clear (to me or to many others) that providing anything other
> than absolutely minimal support for tagging or GC buys you anything.
> By "absolutely minimal", I mean some support on tags for fixnums (and
> maybe floating point numbers), and efficient user-level access to the
> page remapping hardware.

  Yes, it could be absolutely minimal.  Beyond that, though, some
additional support would be useful.  For example, pointer tags that
specify the kind of data referenced by a pointer will reduce memory
access frequency and help the system dispatch to the handling
routines.
  With MOLDS, it is not necessary to manipulate the paging mechanisms.

Maegawa
From: Fernando Mato Mira
Subject: Re: The New Lisp Machine
Date: 
Message-ID: <1t05ru$rcm@disuns2.epfl.ch>
In article <·················@arolla.idiap.ch>, ···@arolla.idiap.ch (Thomas M. Breuel) writes:
> 
> > I would make a co-processor capable of fast type checking and
> > function/method dispatching.
> 
> It's not clear (to me or to many others) that providing anything other
> than absolutely minimal support for tagging or GC buys you anything.
> By "absolutely minimal", I mean some support on tags for fixnums (and
> maybe floating point numbers), and efficient user-level access to the
> page remapping hardware.
> 

From "Lisp Systems in the 1990s", by D. Kevin Layer and Chris Richardson, in
Communications of the ACM, Vol.34, Nr. 9, Sept. 1991 (special issue on Lisp):

"D. Johnson [in "Trap architectures for Lisp Systems", in the
Proceedings of the 1990 ACM Conference on Lisp and Functional Programming]
estimates that the cost in additional CPU logic of implementing fast traps
is only about 1.6% in the case of SPARC, while the increase in performance for
some Lisp programs could be as much as 35%" 

"Without the fast trap discussed earlier, the incremental garbage collection
technique cannot be used on stock hardware..."

"The hardware modifications proposed by Johnson will benefit not only Lisp,
but also full implementations of IEEE floating-point and any program that
needs traps to be fast."

Has this made it into any of the latest architectures?



-- 
Fernando D. Mato Mira
Computer Graphics Lab			  "There is only one Language
Swiss Federal Institute of Technology	    and McCarthy is its prophet"
········@di.epfl.ch

FAX 	 : +41 (21) 693 - 5328

Disclaimer:

disclaim([],[]).
disclaim([H|T],[DH,DT]) :- fixed_point_disclaimer(H,DH),
			   disclaim(T,DT).
fixed_point_disclaimer(Text,fixed_point_disclaimer(Text,_)).
From: Mark Johnson
Subject: Re: The New Lisp Machine
Date: 
Message-ID: <MJOHNSON.93May14084515@netcom.Netcom.COM>
In article <··········@disuns2.epfl.ch> ········@di.epfl.ch (Fernando Mato Mira) writes:
  $
  $  "D. Johnson [in "Trap architectures for Lisp Systems", in the
  $  Proceedings of the 1990 ACM Conference on Lisp and Functional Programming]
  $  estimates that the cost in additional CPU logic of implementing fast traps
  $  is only about 1.6% in the case of SPARC, while the increase in performance for
  $  some Lisp programs could be as much as 35%" 
  $
  $  "Without the fast trap discussed earlier, the incremental garbage collection
  $  technique cannot be used on stock hardware..."
  $
  $  "The hardware modifications proposed by Johnson will benefit not only Lisp,
  $  but also full implementations of IEEE floating-point and any program that
  $  needs traps to be fast."

What a glorious opportunity for research!  A small group at a university
can ACTUALLY BUILD HARDWARE and see these improvements.  The underlying
foundation (SPARC) is well-understood, quick implementation is possible
[the first SPARC, in the Sun 4/260, was just an LSI Logic 20K-gate gate array],
excellent compilers are available, wonderful Pixie-like profiling and
tracing tools are available, SPARC International will give you
tons and tons of support code, tests, validation suites, and documentation
for the nominal $99 architectural license fee, and best of all
huge bodies of software are already present, waiting to be tried out
on the new idea.  Many IEEE floating point programs, many others
that need fast traps, and indeed a good number of Lisp programs are
sitting around, today, as SPARC binaries ... ready to go.

And, because of the large number of ex-Berkeley, ex-Lisp-hacker
personnel at Sun, who worked on SPUR and SOAR and so forth, plenty
of support (either moral or technical or financial or some combination
of the three) FROM SUN ITSELF could be drummed up by a thorough
and persistent researcher.

Unfortunately, research groups sometimes focus on "propose a new idea"
rather than "propose a new idea, implement it, test it out, and
report the results."  Let's hope the Mira team goes for it!

--
From: Fernando Mato Mira
Subject: Re: The New Lisp Machine
Date: 
Message-ID: <1t0pae$avv@disuns2.epfl.ch>
In article <······················@netcom.Netcom.COM>, ········@netcom.Netcom.COM (Mark Johnson) writes:
> 
> Unfortunately, research groups sometimes focus on "propose a new idea"
> rather than "propose a new idea, implement it, test it out, and
> report the results."  Let's hope the Mira team goes for it!
 				       ^^^^
Hey, while the Miralab is our sister lab, and the "Mira Group" actually exists, that has
nothing to do with my surname (Mato Mira). You know, this is a computer graphics lab (too
production-oriented, maybe) and I am just lucky enough that I can try to do something in Lisp.
Now, I would LOVE to get involved in all this Lisp hardware hacking, and probably somebody
in the VLSI lab here would be interested, but I doubt that anything can start unless one gets
actively involved in it. I currently cannot take on yet another assignment. Now, I guess it would
be in the interest of SGI to try to "steal" AI clients from Sun, and Sun would definitely
try to avoid that, so I do not see why they couldn't spend some bucks checking this out. Maybe
they could finance some graduate student to do it for them.

And, again, this is not just a Lisp thing, so maybe there are some $$$$$$$ in it after all.

-- 
Fernando D. Mato Mira
Computer Graphics Lab			  "There is only one Language
Swiss Federal Institute of Technology	    and McCarthy is its prophet"
········@di.epfl.ch

FAX 	 : +41 (21) 693 - 5328

Disclaimer:

disclaim([],[]).
disclaim([H|T],[DH,DT]) :- fixed_point_disclaimer(H,DH),
			   disclaim(T,DT).
fixed_point_disclaimer(Text,fixed_point_disclaimer(Text,_)).
From: Dave Kohr
Subject: Re: The New Lisp Machine
Date: 
Message-ID: <C71Asx.LpJ@cs.uiuc.edu>
In article <······················@netcom.Netcom.COM>
········@netcom.Netcom.COM (Mark Johnson) writes:
>What a glorious opportunity for research!  A small group at a university
>can ACTUALLY BUILD HARDWARE and see these improvements.

The Alewife project at MIT has already done something similar to this: they
have modified an LSI Logic SPARC design to support fast switching among
multiple contexts, using the existing SPARC register windows to store the
processor state for each context.  The modified CPUs (which they call
Sparcles) form the nodes (processing elements) of a cache-coherent
multiprocessor (the Alewife machine).  The best reference I have to this
project is an MIT technical memo:

	Agarwal, Chaiken, et al., "The MIT Alewife Machine: A Large-Scale
	Distributed-Memory Multiprocessor", MIT/LCS TM-454, 1991.

There must be more recent papers written about it, though.
-- 
Dave Kohr     CS Graduate Student     Univ. of Illinois at Urbana-Champaign
Work: 3244 DCL, (217)333-6561  Home: (217)359-9350  E-mail: ···@cs.uiuc.edu
                   "One either has none or not enough."
From: Kelly Murray
Subject: Re: The New Lisp Machine
Date: 
Message-ID: <1t0vr1INNd8h@no-names.nerdc.ufl.edu>
|> "D. Johnson [in "Trap architectures for Lisp Systems", in the
|> Proceedings of the 1990 ACM Conference on Lisp and Functional Programming]
|> estimates that the cost in additional CPU logic of implementing fast traps
|> is only about 1.6% in the case of SPARC, while the increase in performance for
|> some Lisp programs could be as much as 35%" 
|> 
|> "Without the fast trap discussed earlier, the incremental garbage collection
|> technique cannot be used on stock hardware..."

Most existing RISC chips already have this ``fast trap'' hardware.
The problem is that these fast traps are caught by the kernel code, 
and not user code.  They become high-overhead Unix signals to a user process,
and therefore are not useful to a Lisp system running under Unix.

This is one reason why the New LispM project proposes Lisp as the OS kernel.
With all the sources available, it would provide an excellent platform for 
experimenting with different gc and other ``system''-level ideas.

>From: ········@netcom.com (Mark Johnson)
>What a glorious opportunity for research!  A small group at a university
>can ACTUALLY BUILD HARDWARE and see these improvements.  The underlying
>foundation (SPARC) is well-understood, quick implementation is possible
>[the first SPARC, in the Sun 4/260, was just an LSI Logic 20K-gate gatearray],
>excellent compilers are available, wonderful Pixie-like profiling and
>tracing tools are available, SPARC International will give you
>tons and tons of support code, tests, validation suites, and documentation
>for the nominal $99 architectural license fee, and best of all
>huge bodies of software are already present, waiting to be tried out
>on the new idea.  Many IEEE floating point programs, many others
>that need fast traps, and indeed a good number of Lisp programs are
>sitting around, today, as SPARC binaries ... ready to go.

>And, because of the large number of ex-Berkeley, ex-Lisp-hacker
>personnel at Sun, who worked on SPUR and SOAR and so forth, plenty
>of support (either moral or technical or financial or some combination
>of the three) FROM SUN ITSELF could be drummed up by a thorough
>and persistent researcher.

>Unfortunately, research groups sometimes focus on "propose a new idea"
>rather than "propose a new idea, implement it, test it out, and
>report the results."  Let's hope the Mira team goes for it!

Actually building hardware is certainly not feasible for me, nor for most
of the interested Lispers out there.  The open nature of the SPARC
makes it a good choice for this kind of thing, for sure.

I think running on PC hardware would make any efforts much more widely
usable as a research platform, though certainly not state-of-the-art.
Isn't it possible to build a custom co-processor for a 386 system?
Can't you replace a 387 with your own chip?

-Kelly Murray
From: Robert Krajewski
Subject: Re: The New Lisp Machine
Date: 
Message-ID: <ROBERTK.93May14142323@rkrajewski.lotus.com>
In article <··················@nervous.cis.ohio-state.edu> ·····@nervous.cis.ohio-state.edu (Arun Welch) writes:

   In article <············@no-names.nerdc.ufl.edu> ···@prl.ufl.edu (Kelly Murray) writes:


      Third, the idea was not to run on top of Unix, but to replace it, which would
      yield many of the advantages of the LispM in terms of high integration,
      and thus smaller memory and disk requirements.

   Unfortunately, the market pretty much requires Unix these days. Just
   ask anyone from TI, Xerox, IIM, LMI, or Symbolics (did I miss
   anyone?), or if you're willing to look further afield, PERQ, Apollo,
   DG, DEC, etc....  The hardware vendors were losing market share to the
   Unix-based systems even with a performance/development-environment
   advantage.

Actually, it depends what you mean by "the market." As the workstation
and high-end PC markets blur, non-Unix operating systems such as
Windows NT and Pink will be just as plausible for advanced software as
Unix is today. Windows NT will run on the Pentium and Alpha
processors, for example.

Indeed, I think one intriguing way of propagating the Lisp Machine
"gospel" would be to implement a co-existent Lisp Machine subsystem as
a client of a microkernel. I don't know if any serious, large non-Unix
subsystems (that coexist with the resident Unix) have been built on
top of Mach (does NextStep count? is it really "different" from
Unix?), but it's a possibility. A serious developer might also be able
to get help from Microsoft to build such a subsystem as a client of
the NT microkernel. Although Microsoft has not publicly committed to
opening up the NT internal API so that anybody can write a subsystem,
they will release the NT source code to interested academic
institutions. And anybody who is really gung-ho about this sort of
thing should be able to figure out what to do from the source code.

By using the subsystem approach, you should be able to share processor
and storage resources efficiently so that the "normal" subsystems that
support productivity applications and tools can still run.