From: Joe Konstan
Subject: Summary of Responses on Parallel Lisp Systems
Date:
Message-ID: <42929@ucbvax.BERKELEY.EDU>
It seems that responses have stopped trickling in, so here is the promised
summary of responses on parallel Common Lisps. These have been heavily
edited to keep down your reading volume, so any mistakes introduced are mine.
If anyone would like the complete file, please send me e-mail.
----------------------------------------------------------------------------
From: Arun Welch <·····@cis.ohio-state.edu>
Which lisp?
Butterfly Common Lisp, based on Butterfly Scheme, based on MultiScheme.
Platform?
BBN Butterfly.
Programming model?
Futures.
Comments:
Porting PCL to this beast was a *bitch*, mostly due to the
architecture of the lisp.
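For readers who haven't seen the model: a future starts evaluating an
expression in parallel and immediately returns a placeholder; touching the
placeholder blocks until the value is ready. A minimal MultiScheme-style
sketch (FUTURE and TOUCH are system-specific operators, not standard
Common Lisp, and SLOW-COMPUTATION is a stand-in for real work):

```lisp
;; Futures model as in MultiScheme/Butterfly Lisp (sketch).
;; FUTURE and TOUCH are implementation-specific, not standard CL.
(let ((x (future (slow-computation a)))   ; begins running in parallel
      (y (future (slow-computation b))))  ; so does this one
  ;; other work can happen here; TOUCH blocks until each value is ready
  (+ (touch x) (touch y)))
```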
----------------------------------------------------------------------------
From: ············@cs.utah.edu (Robert R. Kessler)
Which lisp?
We use Concurrent Scheme, our own design. It turns out that our Scheme
is implemented on top of a Common Lisp subset, and thus the concurrency
model is implemented in CL. You can program in CL, but you have to give
up some nasties -- fluid variables, explicit calls to eval, etc.
Platform?
We mainly use networked HP workstations. We also have been building the
Mayfly, a distributed memory parallel computer, which kind of runs. We
generally debug on the networked systems and then move to the Mayfly
for speed.
Programming model?
Something called Domains. Domains are things within which there is mutual
exclusion and data sharing. Across domains, there is data copying and
multiple threads. We have papers if you really want to know more. We
also have DPOS, which is a parallel programming system that uses graphics
to specify the parallelism, straight sequential code for doing the
computation.
Comments:
Parallel programming in general is hard.
----------------------------------------------------------------------------
From: ····························@uunet.UU.NET (Todd Kaufmann)
Which Lisp?
Allegro CLiP (Common Lisp in Parallel).
I've also used the multiprocessing capability of Allegro, which works on
single processors. Lucid has the same functions; both derive from the
Symbolics stack-group model.
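That stack-group-style multiprocessing is exposed in Allegro through the mp
package; a minimal sketch, assuming the mp:process-run-function interface
(Lucid's names differ slightly):

```lisp
;; Spawn a lightweight process running the given function.
;; On a single processor these are time-sliced, not truly concurrent.
(mp:process-run-function "worker"
  (lambda ()
    (format t "running in a separate process~%")))
```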
Platform?
Sequent Symmetry
Programming model?
CLiP provides multilisp, spurlisp, and qlisp models, and users are free to
mix and match features.
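As I understand the qlisp model, parallelism is introduced through forms
like QLET, whose first argument controls whether the bindings are actually
evaluated in parallel. A hedged sketch (QLET is qlisp/CLiP specific, not
standard Common Lisp, and SUM-TREE is a hypothetical recursive function):

```lisp
;; QLisp-style parallel binding (sketch): when the predicate is
;; non-nil, the binding forms may be evaluated in parallel.
(defun sum-tree (tree)
  (if (atom tree)
      (or tree 0)
      (qlet t ((left  (sum-tree (car tree)))
               (right (sum-tree (cdr tree))))
        (+ left right))))
```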
Comments?
Not yet really, I've just started using it. Mostly I'm working on tools to
help with debugging at the application level: monitoring and profiling
code, on a per-function/process basis; understanding the process structure
(communications between, and forking/joining of processes).
One of the complications is that a process may migrate between physical
processors, and though you can tell which processor it is on, taking the
process run time at the start from one processor and the run time at
completion from another gives screwy results.
----------------------------------------------------------------------------
From: ·······@NMSU.Edu
Which Lisp?
Allegro CLiP
Platform?
20 processor Sequent Symmetry
Comments?
Apparently, we were among the first 'large' Sequent sites for CLiP,
and we did have a few problems initially. As the system administrator,
I spent a great deal of time working with the Franz people on the
problems we found. I have been pleased with the response and with access
to the CLiP 'guru'.
To be fair, a number of the 'problems' with CLiP were, in fact,
configuration related. We are a large Sun shop where remote file
systems are a way of life. This caused specific CLiP operations to take
an unreasonably long time, and in particular it affected lightweight
processes. The 'solution' is to provide a large enough partition on the
host machine to be used as the virtual shared memory space. (This is a
Sequent implementation feature and may not be a problem on other
hardware.)
We also had problems finding loader versions compatible with the CLiP
foreign-function procedures. We restored an older loader for CLiP and
have had no further problems with foreign-function loads.
I have been examining the overhead associated with LWPs. I am
reasonably convinced that a great deal of the overhead is in the
underlying implementation and not in CLiP itself, which surprised me.
These tests do indicate that granularity is best at the 'matrix' level
rather than the 'row or column' level.
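The granularity point can be illustrated: one parallel task per whole
matrix product amortizes the per-LWP startup cost, while one task per row
may spend more time creating processes than doing arithmetic. A
hypothetical sketch using a future-style operator (FUTURE, TOUCH, and
MATRIX-MULTIPLY are stand-ins, not standard Common Lisp):

```lisp
;; Coarse grain: one task per matrix product -- startup cost is
;; paid once per large unit of work.
(defun par-products (pairs)
  (let ((tasks (mapcar (lambda (p)
                         (future (matrix-multiply (car p) (cdr p))))
                       pairs)))
    (mapcar #'touch tasks)))
;; Fine grain (one task per row) risks paying more in process
;; startup than each row's computation is worth.
```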
CLiP supports four parallel models: spurlisp, qlisp, multilisp, and
their own parallel-processing package. The first three appear to be
implemented using the fourth, and all have the characteristic features
you would expect. Parallelizing code is very straightforward in
all of these packages.
Lastly, I brought up Starlisp under CLiP, and I have had good success
passing lisp objects to and from C code. I can run parallel lisp
processes that call C procedures, which in turn spawn parallel processes
that access C and lisp objects. This does tend to require a good deal of
disk space for the virtual shared memory (70-100K per LWP), but it is a
nice environment to work in.
---------------------------------------------------------------------
Thanks to all who responded (including a couple whose responses I
omitted from the summary due to redundancy or lack of relevance to
parallel programming in lisp).
Also, I've been looking at TopLevel Lisp, which runs on several platforms
including the 386 and 88K.
Briefly, TopLevel provides a futures-based model with three granularities of
process, from threads costing roughly 3 function calls' worth of overhead to
start, up to processes with overhead of many tens of thousands of function
calls.
Joe Konstan
In article <·····@ucbvax.BERKELEY.EDU> ·······@elmer-fudd.berkeley.edu (Joe Konstan) writes:
>It seems that responses have stopped trickling in, so here is the promised
>summary of responses on parallel common lisps...
Has anybody heard of any LISPs that run on the Transputer?
--
Jay Nelson (TRW) ···@wilbur.coyote.trw.com