From: Petter Gustad
Subject: Lisp cluster
Date: 
Message-ID: <87wv9jd022.fsf@scintight.dolphinics.no>
I'm looking for pointers to research papers regarding parallel lisp
systems running on clusters. I tried searching on the net as well as
some IEEE proceedings without finding very much of interest. Maybe
the lisp research community did this 30-40 years ago, so the papers
are only found in old research journals, or carved in stone (an image
from an old Byte cover, with lisp code written on a stone on the moon
or some other planet, appears in my head :-). Some other thoughts:

* Lisp has an advantage over most other languages, i.e. it can do a
  simple remote-eval (rpc,rmi) on a different node in the cluster
  (see the sketch after this list).

* CLOS object instances could live and migrate through the nodes in
  the cluster. A scheduler could issue an eval where the communication
  overhead would be low.

* Thinking Machines were quite focused on lisp, they must have done
  some extensive work in this field. Steele was working for TM if
  memory serves me right.
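
To make the first point concrete, here is roughly what I have in mind
(an untested sketch of mine, assuming a socket library such as
usocket; EVALing forms off the wire is of course only sane inside a
trusted cluster):

  ;; Server: read a form off the socket, evaluate it, and print the
  ;; result back readably so the client can READ it.
  (defun eval-server (port)
    (usocket:with-socket-listener (listener "0.0.0.0" port)
      (loop
        (usocket:with-connected-socket
            (conn (usocket:socket-accept listener))
          (let ((stream (usocket:socket-stream conn)))
            (print (eval (read stream)) stream)
            (finish-output stream))))))

  ;; Client: ship a form to another node, wait for the result.
  (defun remote-eval (form host port)
    (usocket:with-client-socket (socket stream host port)
      (print form stream)
      (finish-output stream)
      (read stream)))

  ;; e.g. (remote-eval '(+ 1 2) "node3" 4711) => 3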

Thanks

Petter
-- 
________________________________________________________________________
Petter Gustad       8'h2B | (~8'h2B) - Hamlet      http://www.gustad.com
#include <stdio.h>/* compile/run this program to get my email address */
int main(void) {printf ("petter\100gustad\056com\nmy opinions only\n");}

From: Tim Bradshaw
Subject: Re: Lisp cluster
Date: 
Message-ID: <nkjd7bbpgkn.fsf@tfeb.org>
Petter Gustad <····@gustad.com> writes:

> * Thinking Machines were quite focused on lisp, they must have done
>   some extensive work in this field. Steele was working for TM if
>   memory serves me right.
> 

You want to look for *lisp which was the lisp system that ran on the
?TM1?.

Please post anything interesting you find (on the whole area) or mail
me as I'm interested in this stuff too!

--tim
From: Harvey J. Stein
Subject: Re: Lisp cluster
Date: 
Message-ID: <kiwn1afjkrr.fsf@blinky.bloomberg.com>
Tim Bradshaw <···@tfeb.org> writes:
 > Petter Gustad <····@gustad.com> writes:
 > I'm looking for pointers to research papers regarding parallel lisp
 > systems running on clusters. I tried searching on the net as well as
 > some IEEE proceedings without finding very much of interest.
 > 
 > You want to look for *lisp which was the lisp system that ran on the
 > ?TM1?.

There are also Kali Scheme & StarLogo.

Kali Scheme is "a distributed implementation of Scheme that permits
efficient transmission of higher-order objects such as closures and
continuations".  Info is available at:

   http://www.neci.nj.nec.com/PLS/Kali.html

StarLogo is a version of Logo intended to support large numbers of
threads of parallel computation.  It's not intended to run on
clusters, but it may be amenable to such an implementation & results
for StarLogo might be applicable to your needs.  Info on StarLogo can
be found at:

   http://el.www.media.mit.edu/groups/el/Projects/starlogo/community/index.html
   http://wex.www.media.mit.edu/courses/mas712/slweb/

-- 
Harvey Stein
Bloomberg LP
·······@bfr.co.il
From: Harvey J. Stein
Subject: Re: Lisp cluster
Date: 
Message-ID: <kiwitl3jf66.fsf@blinky.bloomberg.com>
·······@bfr.co.il (Harvey J. Stein) writes:

 > Tim Bradshaw <···@tfeb.org> writes:
 >  > Petter Gustad <····@gustad.com> writes:
 >  > I'm looking for pointers to research papers regarding parallel lisp
 >  > systems running on clusters. I tried searching on the net as well as
 >  > some IEEE proceedings without finding very much of interest.
 >  > 
 >  > You want to look for *lisp which was the lisp system that ran on the
 >  > ?TM1?.
 > 
 > There are also Kali Scheme & StarLogo.

There are also a bunch of other parallelizable/distributed languages
around that are functional to various degrees.  Sisal (from Lawrence
Livermore Labs, http://www.llnl.gov/sisal/) comes to mind.  Searching
for 'distributed computing lisp' & 'parallel lisp' should yield
interesting hits.

-- 
Harvey Stein
Bloomberg LP
·······@bfr.co.il
From: George Neuner
Subject: Re: Lisp cluster
Date: 
Message-ID: <3abb935c.241439461@helice>
On 21 Mar 2001 08:48:08 -0500, ·······@bfr.co.il (Harvey J. Stein)
wrote:

> > You want to look for *lisp which was the lisp system that ran on the
> > ?TM1?.

"CM" = "Connection Machine".

I used a CM-2 in college.  I don't recall how *Lisp worked, I only
played with it briefly.  Most of my work on the machine was in *C
[ducking for cover ...]. 

The CM-1 and CM-2 weren't clusters, they were SIMD processors.  Up to
64K 1-bit CPUs in groups of 16 connected by a 12D hypercube data
network.  In the CM-2, each CPU group shared a 32-bit FPU.  The
processors were slaved to a front-end workstation which executed the
*language program and broadcast instructions to them through a
separate control network.  Conditional execution was achieved by
disabling processors that should not execute.

The CM-5 was a MIMD cluster with up to 1K nodes connected in a
hypertree [I forget the dimension].  Each node was a 32-bit SPARC with
four attached vector processors.

CM-3 and CM-4?  Never heard anything about them.  


I too am interested in how well a parallel Lisp could work on a SIMD
machine.  It seems to me that you would have great trouble doing
anything really interesting, such as applying a heterogeneous vector
of functions to a vector of data.
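
To make that concrete, the operation itself is a one-liner in Lisp:

  ;; Applying a heterogeneous vector of functions to a vector of data:
  (map 'vector #'funcall
       (vector #'sin #'sqrt #'-)   ; a different function per element
       (vector 1.0 4.0 5.0))
  ;; => #(0.84147096 2.0 -5.0)

On a SIMD machine, each distinct function in the vector would need its
own broadcast pass with all the other processors disabled, which
defeats the parallelism.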


George
From: Tim Bradshaw
Subject: Re: Lisp cluster
Date: 
Message-ID: <ey3vgoy6r9d.fsf@cley.com>
* George Neuner wrote:
> "CM" = "Connection Machine".

Yes, my braino, I meant CM-1 (but I may really have meant CM-2!).
There was a CM-something at Edinburgh, where I was at the time
(latterly a CM-5, I think, but there were earlier ones too), but I
never used it.


> I too am interested in how well a parallel Lisp could work on a SIMD
> machine.  It seems to me that you would have great trouble doing
> anything really interesting, such as applying a heterogeneous vector
> of functions to a vector of data.

I'm more interested I think in how you'd do stuff on the big
commercial machines which are all, I think, essentially MIMD, though
other classifications seem more useful (shared memory,
uniform/non-uniform memory access &c).  Really I want something which
will let me take a boring multithreaded CL program and scale it by a
factor of 30 or something.  (However fortunately our current
application shares basically no data so we can just run lots of
images!)

--tim
From: George Neuner
Subject: Re: Lisp cluster
Date: 
Message-ID: <3abf5caf.489567641@helice>
On 25 Mar 2001 12:07:42 +0100, Tim Bradshaw <···@cley.com> wrote:


>I'm more interested I think in how you'd do stuff on the big
>commercial machines which are all, I think, essentially MIMD, though
>other classifications seem more useful (shared memory,
>uniform/non-uniform memory access &c).  Really I want something which
>will let me take a boring multithreaded CL program and scale it by a
>factor of 30 or something.  (However fortunately our current
>application shares basically no data so we can just run lots of
>images!)

I would think Lisp *could* do well on any system that presented a
coherent shared virtual memory.  There has been some work on
automatically clustering the working set near its users to reduce
access delays on NUMA SVM machines.  I experimented with this model on
workstation clusters about 8 years ago, but the network delays
fetching remote VM pages were too large to be useful.

[At the time, I was really pulling for the KSR1, which was, AFAIK, the
first commercial supercomputer to have coherent virtual shared memory,
implemented through a dedicated hypertree network ... but Kendall
Square Research botched their accounting and folded before the machine
had time to catch on.]

Private memory systems scale more cheaply than SVM, but are just a
bitch to program effectively.  It's mostly no different from
programming a cluster of independent workstations.  I have seen
Fortran and C compilers that attempt to automagically distribute
statically sized data structures and the code that references them,
but IMO, none do very well without programmer intervention.  I have
not seen any that even attempt to handle dynamic data structures.
Also, nothing I have seen is portable - moving to a different node
configuration always seems to require recompiling.

From my [limited] experience, most programmers either accept static
distribution and recompile for each new configuration, or replicate
the software at each node and handle configuration with a machine
description.


George
From: Tim Bradshaw
Subject: Re: Lisp cluster
Date: 
Message-ID: <ey3n1a86tnj.fsf@cley.com>
* George Neuner wrote:
> Private memory systems scale more cheaply than SVM, but are just a
> bitch to program effectively.  It's mostly no different from
> programming a cluster of independent workstations.  I have seen
> Fortran and C compilers that attempt to automagically distribute
> statically sized data structures and the code that references them,
> but IMO, none do very well without programmer intervention.  I have
> not seen any that even attempt to handle dynamic data structures.
> Also, nothing I have seen is portable - moving to a different node
> configuration always seems to require recompiling.

Right.  Actually, I think all parallel systems (including heavily
multithreaded `conventional' programs) are `just a bitch' to program,
but shared memory is the least painful.  I went to a talk once at the
parallel computing centre in Edinburgh, the gist of which was that big
shared-memory systems were going to clean up in the commercial
marketplace and the dedicated parallel supercomputers were dead, so if
you wanted to do high-performance stuff you'd better make it work on
the commercial boxes.  This was some time before the pervasive big
shared-memory boxes, so it was a bit more interesting then than it is
now!

(Of course by `shared memory' I mean, I guess, shared virtual memory;
I guess NUMA will win in the next round of machines.)

Back to Lisp, I suspect that the hard bit is getting memory
management, specifically GC, to scale reasonably well.

--tim
From: Rahul Jain
Subject: Re: Lisp cluster
Date: 
Message-ID: <99ps6b$f8r$1@joe.rice.edu>
On the subject of Lisp and concurrency, how about some GC ideas?
I'm not anywhere near an expert in any of this, so I may be way off.

With a generational GC, I thought that maybe there could be, not a
"list", but a "tree", of generations. Each of the youngest generations
would be associated with a thread or a group of threads working on most
of the same data. This way, you'd only need to block the threads that
need access to data in a single generation when it fills up and needs to
be cleaned. Other threads can continue processing as normal. I've only
read some simple explanations of how generational GC works, so I don't
know how feasible this scheme is. Maybe the data referenced by one
thread that's in another thread's generation needs to be moved up to a
generation the two threads have in common?
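
Just to illustrate the tree part of the idea (a toy sketch of mine,
nothing like a real collector): each thread allocates into its own
leaf generation, and data shared by two threads gets promoted to their
lowest common ancestor, so collecting a leaf only blocks the threads
under it.

  (defstruct gen parent depth)

  (defun common-gen (g1 g2)
    "Lowest common ancestor of two generations in the tree."
    (loop while (> (gen-depth g1) (gen-depth g2))
          do (setf g1 (gen-parent g1)))
    (loop while (> (gen-depth g2) (gen-depth g1))
          do (setf g2 (gen-parent g2)))
    (loop until (eq g1 g2)
          do (setf g1 (gen-parent g1)
                   g2 (gen-parent g2)))
    g1)

  ;; e.g. two threads' leaf generations under a shared root:
  ;; (let* ((root (make-gen :depth 0))
  ;;        (a    (make-gen :parent root :depth 1))
  ;;        (b    (make-gen :parent root :depth 1)))
  ;;   (common-gen a b))   ; => the ROOT generation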

-- 
-> -/-                       - Rahul Jain -                       -\- <-
-> -\- http://linux.rice.edu/~rahul -=- ·················@usa.net -/- <-
-> -/- "I never could get the hang of Thursdays." - HHGTTG by DNA -\- <-
|--|--------|--------------|----|-------------|------|---------|-----|-|
   Version 11.423.999.220020101.23.50110101.042
   (c)1996-2000, All rights reserved. Disclaimer available upon request.
From: George Neuner
Subject: Re: Lisp cluster
Date: 
Message-ID: <3ac0d1b5.585071227@helice>
On 26 Mar 2001 23:40:32 +0100, Tim Bradshaw <···@cley.com> wrote:

>(Of course by `shared memory' I mean, I guess, shared virtual memory;
>I guess NUMA will win in the next round of machines.)
>
>Back to Lisp, I suspect that the hard bit is getting memory
>management, specifically GC, to scale reasonably well.

With SVM all the CPUs see the same memory image, just as in a
conventional shared-memory multiprocessor, so I would imagine that the
same techniques could be applied.  But I'm not up to date on
multiprocessor GC ... the last time I checked, there were still
significant problems dealing with tenured, shared data.


George
From: Petter Gustad
Subject: Re: Lisp cluster
Date: 
Message-ID: <878zlqs3nd.fsf@scintight.dolphinics.no>
·······@dyn.com (George Neuner) writes:

> Private memory systems scale more cheaply than SVM, but are just a
> bitch to program effectively.  It's mostly no different from

The price ratio between these systems seems to increase rapidly, i.e.
CC-NUMA systems of X nodes vs. a loosely coupled, non-coherent cluster
of, e.g., Linux PCs with a fast interconnect. The reason for this
could also be that there are fewer vendors and larger margins in the
CC-NUMA market; e.g., the Data General CC-NUMA machines use
Intel-based motherboards with an SCI interconnect and a coherent cache
controller.

> cluster of independent workstations.  I have seen Fortan and C
> compilers that attempt to automagically distribute statically sized
> data structures and the code that references them, but IMO, none do
> very well without programmer intervention.  I have not seen any that
> even attempt to handle dynamic data structures.  Also, nothing I have
> seen is portable - moving to a different node configuration always
> seems to require recompiling.

This is where I was thinking lisp would have an advantage. Making
these decisions at compile time is very complex. Lisp systems could
migrate evaluation and computation around the cluster. I know, as
some have pointed out, that a remote eval is costly; however, with a
fast interconnect and the low cost of a node in a cluster it could
still be an advantage. A linear speedup might not be a requirement.

I don't know if an MPI-based lisp program would have any advantage
over an MPI-based C program.

Petter
-- 
________________________________________________________________________
Petter Gustad       8'h2B | (~8'h2B) - Hamlet      http://www.gustad.com
#include <stdio.h>/* compile/run this program to get my email address */
int main(void) {printf ("petter\100gustad\056com\nmy opinions only\n");}
From: Bjørn Remseth
Subject: Re: Lisp cluster
Date: 
Message-ID: <m3itkunrey.fsf@snare.oslo.fast.no>
Petter Gustad <····@gustad.com> writes:

> I don't know if an MPI-based lisp program would have any advantage
> over an MPI-based C program.

Are there any MPI based Lisp programs?

                                                          (Rmz)

-- 
Bjørn Remseth                             Mail:  ·············@fast.no
Systems Engineer                          Web:   http://www.fast.no/
Fast Search & Transfer ASA P.O. Box 1677  Vika NO-0120 Oslo, NORWAY
From: Petter Gustad
Subject: Re: Lisp cluster
Date: 
Message-ID: <87g0fyvvkd.fsf@scintight.dolphinics.no>
···@fast.no (Bjørn Remseth) writes:

> Petter Gustad <····@gustad.com> writes:
> 
> > I don't know if an MPI-based lisp program would have any advantage
> > over an MPI-based C program.
> 
> Are there any MPI based Lisp programs?

I haven't seen any, but I have seen MPI lisp libraries somewhere - so
I assume that somebody has actually used them.

Petter

-- 
________________________________________________________________________
Petter Gustad       8'h2B | (~8'h2B) - Hamlet      http://www.gustad.com
#include <stdio.h>/* compile/run this program to get my email address */
int main(void) {printf ("petter\100gustad\056com\nmy opinions only\n");}
From: George Neuner
Subject: Re: Lisp cluster
Date: 
Message-ID: <3ac2160a.668092315@helice>
On 28 Mar 2001 10:23:34 +0200, Petter Gustad <····@gustad.com> wrote:

>·······@dyn.com (George Neuner) writes:
>
>> Private memory systems scale more cheaply than SVM, but are just a
>> bitch to program effectively.  It's mostly no different from
>
>The price ratio between these systems seems to increase rapidly, i.e.
>CC-NUMA systems of X nodes vs. a loosely coupled, non-coherent cluster
>of, e.g., Linux PCs with a fast interconnect. The reason for this
>could also be that there are fewer vendors and larger margins in the
>CC-NUMA market; e.g., the Data General CC-NUMA machines use
>Intel-based motherboards with an SCI interconnect and a coherent cache
>controller.

The money is in the interconnect.  A strict coherence model with many
nodes requires aggregate terabits/sec and a small network diameter -
usually achieved with high-dimension hypertrees.  Looser coherence
models require less performance from the network, but more brain
performance from the programmer.


>> moving to a different node configuration always seems to require
>> recompiling.

>This is where I was thinking lisp would have an advantage. Making
>these decisions at compile time is very complex. Lisp systems could
>migrate evaluation and computation around the cluster.

My own opinion is that the best way to program these beasts is with a
lazy, resolution-bag model - all data and computations wrapped in
memoizing thunks, evaluated when necessary by whoever needs them, with
the results returned to the bag.  This is, of course, much easier to
do in Lisp than in lesser languages.
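
The building block is just a memoizing thunk, a few lines of Lisp (a
sketch only - a real resolution bag would also need locking and
distribution):

  (defun make-thunk (fn)
    "Wrap FN so it is evaluated at most once; later calls reuse the value."
    (let ((done nil) (value nil))
      (lambda ()
        (unless done
          (setf value (funcall fn)
                done  t))
        value)))

Whichever node pulls a thunk from the bag funcalls it; the memoized
result then goes back to the bag for everyone else.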

YMMV.

George
From: Tim Bradshaw
Subject: Re: Lisp cluster
Date: 
Message-ID: <nkjvgoswh0g.fsf@tfeb.org>
Petter Gustad <····@gustad.com> writes:

> 
> The price ratio between these systems seems to increase rapidly, i.e.
> CC-NUMA systems of X nodes vs. a loosely coupled, non-coherent cluster
> of, e.g., Linux PCs with a fast interconnect. The reason for this
> could also be that there are fewer vendors and larger margins in the
> CC-NUMA market; e.g., the Data General CC-NUMA machines use
> Intel-based motherboards with an SCI interconnect and a coherent cache
> controller.
> 

As far as I know, the thing that costs money is the interconnect.  If
you have a slow or long-latency interconnect you can build really
cheap machines and solve embarrassingly parallel problems pretty well.
But most of the interesting problems (both commercial and scientific)
have pretty hairy communication demands, so you end up spending a lot
of money on very fancy interconnect technology to get memory bandwidth
to any node and decent latency.

For the commercial systems you are also spending a lot of money for
the ability to swap practically any component without the machine
failing, and the promise from the vendor that they'll turn up at short
notice at 3AM to do this.  That's a really important feature to many
people.

--tim
From: George Neuner
Subject: Re: Lisp cluster
Date: 
Message-ID: <3abb9fa6.244585355@helice>
Whoops!  Operator error.  Last message was a reply to Tim Bradshaw.

George
From: Sashank Varma
Subject: Re: Lisp cluster
Date: 
Message-ID: <sashank.varma-2103011118110001@129.59.212.53>
In article <···············@tfeb.org>, Tim Bradshaw <···@tfeb.org> wrote:

>You want to look for *lisp which was the lisp system that ran on the
>?TM1?.
>
>Please post anything interesting you find (on the whole area) or mail
>me as I'm interested in this stuff too!
>
>--tim

danny hillis's dissertation, published in 1985 as "the connection machine",
has a chapter or two on the cm's lisp dialect.

skef wholey has a chapter in peter lee's 1991 book "topics in advanced
language implementation" on implementing the special features of connection
machine lisp on top of common lisp (cltl1), so that one could experience
programming in this dialect using standard hardware.  i want to say that
this code was/is available at the cmu archive...

good luck.  i've always wanted to know more about this stuff as well;
all i know i gleaned from these sources.

sashank
From: Joe Marshall
Subject: Re: Lisp cluster
Date: 
Message-ID: <3dc79kh3.fsf@content-integrity.com>
Petter Gustad <····@gustad.com> writes:

> I'm looking for pointers to research papers regarding parallel lisp
> systems running on clusters.

Use your favorite search engine and check out Bert Halstead's
MultiLisp. 
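
The flavor of MultiLisp, in MultiLisp-style pseudocode (FUTURE and
TOUCH are MultiLisp operators, not Common Lisp; the called functions
are placeholders):

  (let ((x (future (slow-computation))))  ; starts evaluating in parallel
    (other-work)                          ; proceeds concurrently
    (touch x))                            ; blocks until the future resolves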

You should also read Alan Bawden's PhD dissertation 
    ``Implementing Distributed Systems Using Linear Naming''
  previously titled
    ``Linear Graph Reduction: Confronting the Cost of Naming''

This latter is quite interesting.


From: Martin Cracauer
Subject: Re: Lisp cluster
Date: 
Message-ID: <99adrm$r8u$1@counter.bik-gmbh.de>
Petter Gustad <····@gustad.com> writes:

>* Lisp has an advantage over most other languages, i.e. it can do a
>  simple remote-eval (rpc,rmi) on a different node in the cluster.

In my opinion, that is not a real advantage, since the Lisp reader and
other bottlenecks would be involved.

Any cluster built for performance reasons must use a tight network
protocol with its contents specified and coded by the bit.  CORBA has
the same problem: it is not used as often in high-performance
environments as people expected.

Having said this, I still find Lisp very good for coding things by the
bit.
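
For example, packing a protocol header by the bit is pleasant with
byte specifiers (a toy example of mine):

  ;; A 16-bit header: 4-bit opcode in the high bits, 12-bit payload
  ;; length in the low bits.
  (defun pack-header (opcode length)
    (dpb opcode (byte 4 12) (ldb (byte 12 0) length)))

  (defun unpack-header (header)
    (values (ldb (byte 4 12) header)     ; opcode
            (ldb (byte 12 0) header)))   ; length

  ;; (pack-header 3 300)   => 12588
  ;; (unpack-header 12588) => 3, 300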

>* CLOS object instances could live and migrate through the nodes in
>  the cluster. A scheduler could issue an eval where the communication
>  overhead would be low.

Only in rare cases would that be worthwhile.

And I have to add that each time I thought it was OK to accept
overhead in some area, I found that optimizing away the former
bottlenecks moved that area into the performance focus.  You never
know in advance, especially if you are not the only one using the
software.

>* Thinking Machines were quite focused on lisp, they must have done
>  some extensive work in this field. Steele was working for TM if
>  memory serves me right.

Yes, but if you look into it (see Tim's suggestions), you'll see that
the mechanisms used were not the ones you suggest.

The flexibility of Lisp's low-level principles (syntax etc.) was used
to build the most convenient way to express things that fit the
underlying machine.  It was not the case that high-level features of
Lisp, like the reader or reflection, were used.

Martin
-- 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <········@bik-gmbh.de> http://www.bik-gmbh.de/~cracauer/
FreeBSD - where you want to go. Today. http://www.freebsd.org/
From: Marco Antoniotti
Subject: Re: Lisp cluster
Date: 
Message-ID: <y6cy9tzjhie.fsf@octagon.mrl.nyu.edu>
········@counter.bik-gmbh.de (Martin Cracauer) writes:

> Petter Gustad <····@gustad.com> writes:
> 
> >* Lisp has an advantage over most other languages, i.e. it can do a
> >  simple remote-eval (rpc,rmi) on a different node in the cluster.
> 
> In my opinion, that is not a real advantage, since the Lisp reader and
> other bottlenecks would be involved.
> 
> Any cluster done for performance reasons must use a tight network
> protocol with contents specified and coded by the bit.  Same problem
> with CORBA, it is not used as often in high-performance environments
> as people thought.
> 
> Having said this, to code something to the bit, I still find Lisp very
> good.

We have installed a Scyld Beowulf cluster in our back room.  CMUCL
runs happily on it.  In this specific case, after a bit of
investigation into how to use the system, I think the best avenue to
follow would be to provide a portable MPI interface with a backend
optimized for each specific platform.
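
Something along these lines, where the package and operation names are
just a straw man of mine, not an existing library:

  ;;; The portable layer; each platform backend implements these
  ;;; on top of its native MPI library via the local FFI.
  (defpackage #:lisp-mpi
    (:use #:common-lisp)
    (:export #:init #:finalize #:comm-rank #:comm-size
             #:send #:recv))

A node program written against such an interface would then run
unchanged on any backend.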

Just a thought.

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group	tel. +1 - 212 - 998 3488
719 Broadway 12th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA			http://bioinformatics.cat.nyu.edu
             Like DNA, such a language [Lisp] does not go out of style.
			      Paul Graham, ANSI Common Lisp
From: Tim Bradshaw
Subject: Re: Lisp cluster
Date: 
Message-ID: <nkjwv9heuaf.fsf@tfeb.org>
Erik Naggum <····@naggum.net> writes:

> 
>   Why must the reader be a bottleneck?
> 
>   What would you need to make the reader better than bit fiddling?
> 

I was going to post something along the same lines.  I had a bad
experience (in the early 90s) trying to use READ to read what seemed
then like a lot of data, and after that I used to whine for *years*
about slow performance from READ.  Within the last couple of years I
did some stuff which I `prototyped' using READ/PRINT, and discovered
that it was easily fast enough.  There are still a couple of places
where (for implementations I've used in the last few years) READ is
slow -- reading a lot of floats has been one -- but in general you can
get really quite good performance.

>   Designing fast binary protocols is _incredibly_ hard work, and it only
>   works well for a very limited set of solutions.

Oh yes.  And badly-implemented ones often end up *much* larger than
sending the equivalent text.

--tim
From: ·····@cs.uit.no
Subject: Re: Lisp cluster
Date: 
Message-ID: <hv1yrpycp9.fsf@johnmnb.cs.uit.no>
Tim Bradshaw <···@tfeb.org> writes:

> >   Designing fast binary protocols is _incredibly_ hard work, and it only
> >   works well for a very limited set of solutions.
> 
> Oh yes.  And badly-implemented ones often end up *much* larger than
> sending the equivalent text.

I'm using a combination. A binary protocol for operations which most
of the time just move binary data structures around without looking at
the contents (or for operations which _do_ look at the contents, but
only need very simple parsing), and a lisp-based (or python based for
the python version) protocol for the rest.

This means that I don't have to spend much time on the protocol side
for handling more complex information such as multi-clusters with
varying configurations (2-8 way systems, firewalls, long-distance
communication[1], different protocols, thread models, lack of
connectivity etc) and that it was relatively simple to add code for
handling things such as caching, hierarchical global reductions with
partial evaluations etc.

1. One of the clusters is in another country, some 2000 km away. 
   Adds a lot to the latency of global synchronization, but I bought 
   back a lot of that by being able to configure the communication 
   paths and mechanisms dynamically.
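
The dispatch itself is trivial - roughly this, simplified (assuming a
bivalent stream; READ-BINARY-PAYLOAD stands in for the binary
decoder):

  (defun read-message (stream)
    (ecase (read-byte stream)
      (0 (read-binary-payload stream))  ; binary tag: decoded by the bit
      (1 (read stream))))               ; lisp tag: complex/rare messages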

-- 
	// John Markus Bjørndalen
From: ········@hex.net
Subject: Re: Lisp cluster
Date: 
Message-ID: <QAqu6.1253$hU1.302864@news4.aus1.giganews.com>
Erik Naggum <····@naggum.net> writes:
> CORBA is actually a really good example of why it fails to use
> binary interchange "languages" in more than a very restricted
> domain.

Could you elaborate on this?  I'm not sure I follow what precise
failure you are indicating.
-- 
(concatenate 'string "cbbrowne" ·@acm.org")
http://www.ntlug.org/~cbbrowne/rdbms.html
To iterate is human; to recurse, divine.
From: Jeff Greif
Subject: Re: Lisp cluster
Date: 
Message-ID: <Hf7u6.156$NC.4423@dca1-nnrp2.news.digex.net>
There was also a thesis (later published by MIT Press) called something
like Paralation Lisp, by G. Sabot (not sure of the spelling).

There were also some papers on QLisp by Dick Gabriel and some others.
These discussed issues like parallel evaluation, futures, etc.

Sorry, I don't have the references at hand.

Jeff

"Petter Gustad" <····@gustad.com> wrote in message
···················@scintight.dolphinics.no...
>
> I'm looking for pointers to research papers regarding parallel lisp
> systems running on clusters. I tried searching on the net as well as
> some IEEE proceedings without finding very much of interest. Maybe
> the lisp research community did this 30-40 years ago so the papers are
> only found in old research journals, or carved out in stone (image
> from an old Byte cover with lisp code written on a stone on the moon
From: Paolo Amoroso
Subject: Re: Lisp cluster
Date: 
Message-ID: <KBu5OuS=UTqnLhxlZZ6KAqhEkLVs@4ax.com>
On 21 Mar 2001 08:59:49 +0100, Petter Gustad <····@gustad.com> wrote:

> I'm looking for pointers to research papers regarding parallel lisp
> systems running on clusters. I tried searching on the net as well as

You may check this paper:

  "NetCLOS and Parallel Abstractions - Actor and Structure Oriented 
  Programming on Workstation Clusters with Common Lisp"
  Lothar Hotz and Michael Trowe
  Email: hotz AT informatik DOT uni-hamburg DOT de
  Proceedings of the European Lisp User Group Meeting '99

  Abstract:
  In this paper, we describe an extension of Common Lisp which allows the 
  definition of parallel programs within that functional and 
  object-oriented language. In particular, the extensions are the 
  introducing of active objects, sending synchronous and asynchronous 
  messages between them, automatic and manual distribution of active 
  objects to object spaces, and transparent object managing. With these 
  extensions, object-oriented parallel programming on a workstation cluster
  using different Common Lisp images is possible. These concepts are 
  implemented as an extension of Allegro Common Lisp subsumed by the name 
  NetCLOS. Furthermore, it is shown how NetCLOS can be used to realize 
  parallel abstractions for implementing parallel AI methods at a highly 
  abstract level.

To get a copy of the paper you may look for it with your favorite search
engine, purchase a copy of the proceedings from Franz Inc., or contact the
authors.

There is another potentially interesting reference to some conference
proceedings, but I don't have it handy. If I don't post it within a couple
of days, feel free to remind me.


> * Thinking Machines were quite focused on lisp, they must have done
>   some extensive work in this field. Steele was working for TM if
>   memory serves me right.

If you are interested, you can get the *LISP (the Lisp implementation for
the Connection Machine) simulator for CMU CL at:

  ftp://ftp.csl.sri.com/pub/users/gilham/starlisp.tar.gz


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: Paolo Amoroso
Subject: Re: Lisp cluster
Date: 
Message-ID: <JNO5OgvHUvjcD0icEKQm7AOO4lVM@4ax.com>
On Wed, 21 Mar 2001 22:38:30 +0100, Paolo Amoroso <·······@mclink.it>
wrote:

> There is another potentially interesting reference to some conference
> proceedings, but I don't have it handy. If I don't post it within a couple
> of days, feel free to remind me.

Here it is:

  "Parallel Lisp. Languages and Systems - US/Japan Workshop on Parallel
  Lisp. Proceedings"
  Lecture Notes in Computer Science, 441 - Springer Verlag, 1990
  ISBN 3-540-52782-6

I include below the table of contents, which also lists authors.


Paolo

------------------------------------------------------------------------------
 1) "Continuing into the Future: On the Interaction of Futures and
    First-Class Continuations" - Katz, M.; Weise, D.

 2) "Speculative Computation in Multilisp" - Osborne, R. B.

 3) "Garbage Collection in MultiScheme" - Miller, J. S.; Epstein, B. S.

 4) "Qlisp: An Interim Report" - Goldman, R.; Gabriel, R. P.; Sexton, C.

 5) "Low-Cost Process Creation and Dynamic Partitioning in Qlisp" -
    Pehoushek, J. D.; Weening, J. S.

 6) "New Ideas in Parallel Lisp: Language Design, Implementation, and
    Programming Tools" - Halstead, R. H.

 7) "Concurrent Scheme" - Kessler, R. R.; Swanson, M. R.

 8) "The Design of Automatic Parallelizers for Symbolic and Numeric
    Programs" - Harrison, W. L.; Ammarguellat, Z.

 9) "A Reflective Object-Oriented Concurrent Language ABCL/R" -
    Yonezawa, A.

10) "Optimistic and Pessimistic Synchronization in Distributed
    Computing" - Shibayama, E.; Yonezawa, A.

11) "Toward a New Computing Model for an Open Distributed Environment" -
    Tokoro, M.

12) "Concurrent Programming in TAO: Practice and Experience" -
    Takeuchi, I.

13) "A Pseudo Network Approach to Inter-Processor Communication on a
    Shared-Memory Multi-Processor MacELIS" - Murakami, K.

14) "Mul-T: A High-Performance Parallel Lisp" - Kranz, D. A.;
    Halstead, R. H.; Mohr, E.

15) "Integrating Parallel Lisp with Modern Unix-Based Operating
    Systems" - Pierson, D. L.

16) "Mutilisp: A Lisp Dialect for Parallel Processing" - Iwasaki, H.

17) "PM1 and PMLisp: An Experimental Machine and Its Lisp System for
    Research on MIMD Massively Parallel Computation" - Yuasa, T.;
    Kawana, T.

18) "Design of the Shared Memory System for Multi-Processor Lisp
    Machines and Its Implementation on the EVLIS Machine" - Yasui, H.;
    Sakaguchi, T.; Kudo, K.; Hironishi, N.

19) "TOP-1 Multiprocessor Workstation" - Suzuki, N.

20) "A Parallel Lisp Language PaiLisp and Its Kernel Specification" -
    Ito, T.; Matsui, M.
------------------------------------------------------------------------------


-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: Arun Welch
Subject: Re: Lisp cluster
Date: 
Message-ID: <QEeu6.17869$Im6.1895989@newsread1.prod.itd.earthlink.net>
"Petter Gustad" <····@gustad.com> wrote in message
···················@scintight.dolphinics.no...
>
> I'm looking for pointers to research papers regarding parallel lisp
> systems running on clusters.

The BBN Butterfly had a CLtL1 implementation on it called Butterfly
Lisp, which used futures as its parallelism construct. I did a port
of PCL to it, but I'm pretty sure the code is long gone (as are all
the Butterflies). Butterfly CL was built on top of Butterfly Scheme,
if memory serves.

...arun
From: Pierre R. Mai
Subject: Re: Lisp cluster
Date: 
Message-ID: <87itl2kklh.fsf@orion.bln.pmsf.de>
"Arun Welch" <·····@remove-anzus.com> writes:

> The BBN Butterfly had a CLtL1 implementation on it called Butterfly
> Lisp, which used futures as its parallelism construct. I did a port
> of PCL to it, but I'm pretty sure the code is long gone (as are all
> the Butterflies). Butterfly CL was built on top of Butterfly Scheme,
> if memory serves.

Hmm, aren't (some of the) sources to the BBN CL implementation
available at the CMU Lisp Repository?

Regs, Pierre.

-- 
Pierre R. Mai <····@acm.org>                    http://www.pmsf.de/pmai/
 The most likely way for the world to be destroyed, most experts agree,
 is by accident. That's where we come in; we're computer professionals.
 We cause accidents.                           -- Nathaniel Borenstein