From: Mike G.
Subject: Parallel Lisp
Date: 
Message-ID: <1189002231.167450.110710@k79g2000hse.googlegroups.com>
I've spent a little free time over the last two months building a
multi-processor / multi-core network computer in my basement. In no
time at all, I've been able to hack up some primitives for doing
parallel processing across this network in SBCL.

My question is: can someone provide some names of parallel lisp
environments? I know of *Lisp. Anything else? I've been trying to
scare up a copy of the *Lisp Dictionary.

Ultimately, I'm looking for parallel Lisp idioms to implement on top
of my primitives.

Right now, I have a macro plet which has the form:

(plet ((var1 (foo1))
        (var2 (foo2))
         ...)
    (exit-form)
  (form1)
  (form2)
   ...)

The idea is that the variables get initialized (as per an ordinary
let), each form in the body gets executed in parallel, and exit-form
is a chunk of code that executes after all parallel threads have ended
which ties everything together.  This macro expands into my primitives
to initialize / access variables via my centralized tracker and a loop
which patiently waits for the threads to complete. Works well enough.
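For readers who want a concrete picture, here is a minimal local-threads sketch of what such a macro's expansion could look like, written against SBCL's sb-thread API. This is purely illustrative: Mike's actual macro expands into his own (unpublished) network primitives and centralized tracker, not local threads.

```lisp
;; Hypothetical PLET sketch using SB-THREAD; not Mike's actual macro.
;; Body forms that mutate shared bindings would need locking in real use.
(defmacro plet (bindings exit-form &body forms)
  `(let ,bindings
     (let ((threads (list ,@(mapcar (lambda (form)
                                      `(sb-thread:make-thread
                                        (lambda () ,form)))
                                    forms))))
       ;; patiently wait for every body thread to finish ...
       (mapc #'sb-thread:join-thread threads)
       ;; ... then run the exit form, which ties everything together
       ,exit-form)))
```

So `(plet ((x 1) (y 2)) (+ x y) (sleep 0.1) (sleep 0.1))` would run the two sleeps concurrently and evaluate `(+ x y)` once both are done.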

I also have a parallel MAP which splits a map up across multiple
cpu's / cores.

Implementing this is what made me come looking for guidance because I
can think of several ways to split up a MAP that might make sense
under different circumstances.
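One of the obvious splits is contiguous chunking: divide the sequence into one chunk per worker. A hedged sketch (hypothetical code, again on sb-thread rather than Mike's network primitives):

```lisp
;; Contiguous-chunk parallel MAP: one worker thread per chunk.
;; Round-robin or work-stealing splits do better when per-element
;; costs vary widely; this version is best for uniform workloads.
(defun pmap-chunks (fn list &optional (workers 4))
  (let* ((len (length list))
         (size (max 1 (ceiling len workers)))
         (threads
           (loop for start from 0 below len by size
                 collect (let ((piece (subseq list start
                                              (min len (+ start size)))))
                           ;; bind PIECE freshly so each closure
                           ;; captures its own chunk, not a shared var
                           (sb-thread:make-thread
                            (lambda () (mapcar fn piece)))))))
    ;; joining in submission order preserves the input ordering
    (reduce #'append (mapcar #'sb-thread:join-thread threads))))
```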

Any ideas or pointers are welcome.
-M

From: Vebjorn Ljosa
Subject: Re: Parallel Lisp
Date: 
Message-ID: <1189003111.686860.177330@d55g2000hsg.googlegroups.com>
On Sep 5, 10:23 am, "Mike G." <···············@gmail.com> wrote:
> My question is: can someone provide some names of parallel lisp
> environments? I know of *Lisp. Anything else? I've been trying to
> scare up a copy of the *Lisp Dictionary.

If you can dig up a copy of the _Proceedings of the US/Japan Workshop
on Parallel Lisp: Languages and Systems_ (June 1990, Sendai, Japan),
you might find something of interest in there.  For instance, "A
parallel Lisp language PaiLisp and its kernel specification" by
T. Ito and M. Matsui.  (I haven't read it.)

Vebjorn
From: anon
Subject: Re: Parallel Lisp
Date: 
Message-ID: <KzADi.493183$p47.414289@bgtnsc04-news.ops.worldnet.att.net>
Sometime between 1995 and 2005 (the last time I checked), all the 
parallel and concurrent languages seem to have disappeared. Now that 
the average person can buy or build a multi-processor system, we need 
these languages, but for some reason they are nowhere to be found. 

I know of only one non-Lisp parallel language, called "Sisal". It 
is still on the net but is no longer supported by LLNL. It 
automatically creates parallel code for the user.

I also just found an older StarLisp (*Lisp) simulator, created back 
in 1989, which may be downloaded at the bottom of the following 
archive page:
     File Name = Starsim-f20.zip   File size = 580 KB

  http://examples.franz.com/index.html

  "*Lisp (pronounced "star Lisp") is a data parallel extension of the 
Common Lisp programming language, designed for the Connection 
Machine system." 



From: Mike G.
Subject: Re: Parallel Lisp
Date: 
Message-ID: <1189011291.246339.221510@o80g2000hse.googlegroups.com>
On Sep 5, 12:12 pm, ····@anon.org (anon) wrote:
> Sometime between 1995 and 2005 (last time I checked) all parallel and
> concurrent languages seam to have disappeared. Now, that the average
> person can buy or build a multi-processor system, we needs these
> languages but for some reason they are no where to be found.

Well, parallel programming languages have always been in something of
a funk. MAYBE we'll see this situation change in the next 5 years or
so. I wouldn't hold my breath, though.

Don't expect it to be a GOOD language, though (unless you roll your
own with Lisp, like me :) .. chances are good that the solution
everyone raves about will be some foul abomination built on top of
.NET RPC. <shudder>

The other thing to think about here is that distributed network
computing requires very different tools than your typical
shared-memory multicomputer.

> I know of only one parallel language that is non-lisp called "Sisal". It
> is still on the net but is no longer supported by LLNL. It automatic
> creates parallels code for the user.

Yuck. What's the fun in that? I built my system so that I could write
parallel algorithms, not have a compiler make a hash of it for me :)

>
> Just found an older parallel version of *lisp  or StarLisp simulator which
> may be downloaded at bottom of the following archive URL page but it
> was created back in 1989.
>      File Name = Starsim-f20.zip   File size = 580 KB
>
>  http://examples.franz.com/index.html
>
>   "*Lisp (pronounced "star Lisp") is a data parallel extension of the
> Common Lisp programming language, designed for the Connection
> Machine system."
>

That's not a parallel Lisp. It is a simulator (running on serial
Lisp) for *Lisp, the CM Lisp. I am investigating whether it is worth
making a *Lisp simulator on top of my parallel primitives.

But how much CM *Lisp code is out there to worry about? Probably not
much. Maybe I could find an example or two in an academic paper that
would make for a nice test case, but that's about it.
From: anon
Subject: Re: Parallel Lisp
Date: 
Message-ID: <%WHDi.70068$ax1.54388@bgtnsc05-news.ops.worldnet.att.net>
Well, back between 1970 and the mid 1990s, parallel research was done 
in colleges and universities and you could find plenty of parallel 
compilers or language systems
From: anon
Subject: Re: Parallel Lisp
Date: 
Message-ID: <rtIDi.70145$ax1.46006@bgtnsc05-news.ops.worldnet.att.net>
Sorry about that! I hit the wrong button in my last post.

Well, back between 1970 and the mid 1990s, parallel research was done 
in colleges and universities and you could find plenty of parallel 
compilers and language systems. But when the money dried up, so did 
the research and the compilers.  Papers can still be found, but you 
might need to be a paying ACM (or other group) member just to read them.

Now, since parallel systems (aka servers) have been around over 10 
years, you would think that we could find a couple of compilers, but 
again, that's a no. So I would not look to Microsoft or its .NET to help.

Even with the SGI/Sun/IBM parallel (128+ processor) blade servers/systems 
that they sell to NASA and other companies, the compilers are too highly 
priced for the average user.  

Just an idea! You could get StarLisp (*Lisp) and Sisal and play with 
them, maybe modify *Lisp to become a true parallel system. Then you 
could make the big bucks.

But what I think is going to happen is that someone has a downloaded 
copy of some of those languages, and when they have the time they will 
adapt them for the micro multi-processor systems and make a killing 
when they release them.

As for Erlisp: well, I downloaded a snapshot copy, but the package is 
very small, only 9K. So I think it is just another simulator at the moment.


From: Mark H.
Subject: Re: Parallel Lisp
Date: 
Message-ID: <1189116074.001708.165920@y42g2000hsy.googlegroups.com>
On Sep 5, 9:54 am, "Mike G." <···············@gmail.com> wrote:
> Well, parallel programming languages have always been in something of
> a funk.

This is changing now, I think, mostly because of the multicore thing
and GPU's.  Intel (Ct and other libraries) and NVidia (CUDA) each have
their own parallel languages and language extensions coming out.

> > I know of only one parallel language that is non-lisp called "Sisal". It
> > is still on the net but is no longer supported by LLNL. It automatic
> > creates parallels code for the user.

There are gobs and gobs of non-Lisp parallel languages out there.  A
lot of them didn't survive longer than a PhD thesis, but some are used
regularly by small communities, enough for companies to justify
investing effort in developing optimizing compilers, despite the
existence of academic optimizing compilers that are pretty good.  UPC
(Unified Parallel C) in particular comes to mind.  Cilk has been
around for a while and is quite good for shared-memory environments; I
don't think they have a cluster back-end, at least not yet.

Erlang is a great language, but I don't think they have a good cluster
backend yet.  Their shared-memory backend is fairly recent (it's
covered in an Erlang book that came out this year), and their
distributed-memory backend still relies on TCP/IP (rather than going
over MPI or GASNet and thus exploiting specialized high-bandwidth, low-
latency cluster network hardware).  This was a logical choice for
Erlang's original purpose in telecommunications.  Hopefully porting it
to use MPI or GASNet won't be too hard...

I think someone is working on an Erlang-like extension to CL, called
"ErLisp."  I'm not sure whether that project is active.

mfh
From: Kjetil
Subject: Re: Parallel Lisp
Date: 
Message-ID: <46dee070$1@news.broadpark.no>
anon wrote:
> Sometime between 1995 and 2005 (last time I checked) all parallel and 
> concurrent languages seam to have disappeared. Now, that the average 
> person can buy or build a multi-processor system, we needs these 
> languages but for some reason they are no where to be found. 
> 
> I know of only one parallel language that is non-lisp called "Sisal". It 
> is still on the net but is no longer supported by LLNL. It automatic 
> creates parallels code for the user.
> 

Does Erlang qualify?


-Kjetil
From: Scott Burson
Subject: Re: Parallel Lisp
Date: 
Message-ID: <1189036923.667024.45110@22g2000hsm.googlegroups.com>
On Sep 5, 9:59 am, Kjetil <····@no.no> wrote:
> Does Erlang qualify?

And does anyone here have any experience with Erlisp?

http://common-lisp.net/project/erlisp/

(Just asking -- I haven't tried it.)

-- Scott
From: Nicolas Neuss
Subject: Re: Parallel Lisp
Date: 
Message-ID: <87myvzszd5.fsf@ma-patru.mathematik.uni-karlsruhe.de>
Scott Burson <········@gmail.com> writes:

> On Sep 5, 9:59 am, Kjetil <····@no.no> wrote:
> > Does Erlang qualify?
> 
> And does anyone here have any experience with Erlisp?
> 
> http://common-lisp.net/project/erlisp/
> 
> (Just asking -- I haven't tried it.)
> 
> -- Scott

Since no one else feels inclined to answer, I'll have a go at it: I tried
Erlisp and the examples worked nicely for me.  It looks like a nice
language if you have a problem which can be mapped to interacting
processes (e.g. a game with multiple players).  For my application domain
(numerics of partial differential equations), I am more interested in
parallelization for efficiency (i.e. exploiting multiple processors).  At
the moment, I am parallelizing on a shared-memory basis, with quite a lot
of "communication".  Erlisp is not (yet?) adequate here, because it
communicates using strings, and because it does not work for
distributed-memory architectures.  However, I think that it might be a
rather good interface in this situation as well, if it is extended
appropriately (probably using MPI under the hood).

Nicolas
From: Nicolas Neuss
Subject: Re: Parallel Lisp
Date: 
Message-ID: <878x7iuaus.fsf@ma-patru.mathematik.uni-karlsruhe.de>
Sorry, forget this post.  I had tested CL-MUPROC, not Erlisp.

Nicolas

From: D Herring
Subject: Re: Parallel Lisp
Date: 
Message-ID: <v5mdndIbNJ9dHkLbnZ2dnUVZ_o6knZ2d@comcast.com>
Kjetil wrote:
> anon wrote:
>> Sometime between 1995 and 2005 (last time I checked) all parallel and 
>> concurrent languages seam to have disappeared. Now, that the average 
>> person can buy or build a multi-processor system, we needs these 
>> languages but for some reason they are no where to be found.
>> I know of only one parallel language that is non-lisp called "Sisal". 
>> It is still on the net but is no longer supported by LLNL. It 
>> automatic creates parallels code for the user.
> 
> Does Erlang qualify?

There's also Mozart/Oz: http://www.mozart-oz.org/
From: Alexander Schreiber
Subject: Re: Parallel Lisp
Date: 
Message-ID: <slrnfeb7ar.pbl.als@mordor.angband.thangorodrim.de>
anon <····@anon.org> wrote:
> Sometime between 1995 and 2005 (last time I checked) all parallel and 
> concurrent languages seam to have disappeared. Now, that the average 
> person can buy or build a multi-processor system, we needs these 
> languages but for some reason they are no where to be found. 

Hmm, Occam is still alive, with at least two freely available compilers:
 - KROC (Kent Retargetable Occam Compiler):
   http://www.cs.kent.ac.uk/projects/ofa/kroc/
 - The Amsterdam Compiler Kit also appears to contain an Occam compiler
   http://tack.sourceforge.net/

Occam was designed as the native programming language for transputer
systems (which were inherently parallel, scaling up to massively
parallel architectures).

There is also Ada, which has multiprocessing as part of the language
spec, with GNAT as a free compiler environment:
http://www.gnu.org/software/gnat/gnat.html

One can also do parallel programming in Lisp, preferably one that maps
Lisp threads to operating-system threads. As far as I'm aware, the only
freely available Lisp for x86 that does this is SBCL, where it works
nicely, as the previous poster already wrote.
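As a minimal illustration of that mapping (assuming a threads-enabled SBCL build): each sb-thread thread is backed by an OS thread, so independent computations can genuinely occupy separate cores.

```lisp
;; Minimal sb-thread example: the worker runs on its own OS thread,
;; and JOIN-THREAD returns the thread function's value.
(let ((worker (sb-thread:make-thread
               (lambda () (reduce #'+ '(1 2 3 4 5))))))
  (sb-thread:join-thread worker))   ; => 15
```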

> I know of only one parallel language that is non-lisp called "Sisal". It 
> is still on the net but is no longer supported by LLNL. It automatic 
> creates parallels code for the user.

I am rather wary of such claims - typical code not explicitly designed for
parallel execution tends not to win much by automagic parallelisation.


-- 
"Opportunity is missed by most people because it is dressed in overalls and
 looks like work."                                      -- Thomas A. Edison
From: anon
Subject: Re: Parallel Lisp
Date: 
Message-ID: <qXmFi.523500$p47.49236@bgtnsc04-news.ops.worldnet.att.net>
The person was asking about parallel languages for Lisp.

Paradigm: concurrent 

Ada ----- Since the end of the DOD mandate it has fallen out 
          of favor, even though it is still being used. GNAT is the 
          main supplier and is free, but it uses GCC and is under the 
          GNU License. The current version of Ada is 2005, but some
          compilers are still using the Ada 95 spec. Also, GNAT only 
          uses one processor in its current version. 

Occam -- It is also falling out of favor. Its parallel design 
         structure has not been maintained since 1996.


Paradigm: parallel

ACL2 --  A parallel Lisp variant that uses SBCL and OpenMCL. But it 
         still is not truly parallel.


Sisal -- It was designed by the people at LLNL for SMP-type 
         systems. I even experimented with it on an 8-CPU 
         system at LLNL. 

Other compilers, such as parallel Lisp, C, FORTRAN, and COBOL, 
are still priced out of reach for the average user. And they are not 
rated for the new dual processors from Intel or AMD.

There are some projects out there, but most use PVM, MCP, 
or a Beowulf design across multiple systems or boards built on 
Pentium or AMD parts. But processors like the old Pentium D, or the 
new Core 2 (Duo/Quad), have not been around that long.


Beowulf -- Beowulf.org.  A design which deals with clusters, a 
           type of parallelism.


Paradigm: ??? 

SBCL  -- Not truly parallel itself, but some projects, such as 
         ACL2, have used SBCL in their parallel designs.


From: anon
Subject: Re: Parallel Lisp -- Microsoft says:
Date: 
Message-ID: <aEwFi.86268$ax1.82146@bgtnsc05-news.ops.worldnet.att.net>
Just found this on the net. 
The article is something like Microsoft saying we would never need 
more than 640K of RAM. But it does give other information.


   Microsoft sees shift to parallel in 10yrs
   Posted: 03 Sep 2007


   Multicore processors are driving a historic shift to a new parallel 
   architecture for mainstream computers. But a parallel programming 
   model to serve those machines will not emerge for five to 10 years, 
   according to experts from Microsoft Corp.

   ... 

by  Rick Merritt
EE Times

The complete article can be found at: 
http://www.eetasia.com/ART_8800478019_480100_NT_692e7452.HTM
From: Mike G.
Subject: Re: Parallel Lisp -- Microsoft says:
Date: 
Message-ID: <1189614703.140874.284250@d55g2000hsg.googlegroups.com>
On Sep 11, 9:22 am, ····@anon.org (anon) wrote:
> Just found this on the net:
> The article is something like Microsoft saying we would neven need
> more than 640K for Ram size. But it does give other information.

This is kinda funny, mostly because some serious multi-core machines
are available today. And with AMD entering the race with a quad
Opteron, and 8-core systems from Intel on the immediate horizon, I
expect to see prices for 4/8-way systems fall to a reasonable range
within a year or two. Now, granted, my definition of "reasonable" may
be slightly higher than some people's, but still.. they are coming,
and they'll be here before the decade mark for sure.

On an altogether different note, I've put some new constructs into my
parallel programming environment this morning for data-driven
parallelism. Each core/node receives data on a socket, applies a
transform to it, and pumps it out another socket to the next
core/node. It's very limited right now -- I have to expand it to allow
multiple inputs / outputs, for instance. But the basic method seems
OK: I got pretty good results doing a simple FFT / filter / iFFT
chain. Not as fast as an analog computer, though :)
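A node of this kind can be sketched as a loop over a pair of streams. The streams here stand in for the sockets Mike describes, and the READ/PRINT framing is an assumption for illustration, not his actual wire format:

```lisp
;; Hypothetical sketch of one data-driven pipeline stage: read an
;; item from the upstream connection, transform it, push the result
;; downstream.  Real code would use socket streams and a proper
;; framing protocol instead of READ/PRINT.
(defun run-node (transform in out)
  (loop for item = (read in nil :eof)   ; :EOF sentinel ends the loop
        until (eq item :eof)
        do (print (funcall transform item) out)
           (force-output out)))
```

Chaining such nodes, with each stage's OUT connected to the next stage's IN, gives a pipeline like the FFT / filter / iFFT chain described above.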
From: anon
Subject: Re: Parallel Lisp -- Microsoft says:
Date: 
Message-ID: <7v%Fi.90642$ax1.37253@bgtnsc05-news.ops.worldnet.att.net>
That's the reason I posted it.

Everyone knows there are groups that are into supercomputers. But 
what Microsoft forgot is the two new groups on the block.

First, even though they have been around a while, there are the 
Internet servers. They need all the power they can get. But what 
they really need is "Internet 2" for bandwidth.

But the newest person is the gamer. Gamers want more and more power 
to play those games. And they are willing to spend the money for that 
power. Intel and AMD know this and are willing to oblige. 

But Microsoft, as always, will play catch-up and try to dominate the 
field of software again.


From: Rainer Joswig
Subject: Re: Parallel Lisp
Date: 
Message-ID: <joswig-203289.16371405092007@news-europe.giganews.com>
In article <························@k79g2000hse.googlegroups.com>,
 "Mike G." <···············@gmail.com> wrote:

> I've spent a little free time over the last two months building a
> multi-processor / multi-core network computer in my basement. In no
> time at all, I've been able to hack up some primitives for doing
> parallel processing across this network in SBCL.
> 
> My question is: can someone provide some names of parallel lisp
> environments? I know of *Lisp. Anything else? I've been trying to
> scare up a copy of the *Lisp Dictionary.

QLisp (http://www.dreamsongs.com/Qlisp.html)
Multilisp
Paralation Lisp (there is a book about that, The Paralation Model)
Connection Machine Lisp


> 
> Ultimately, I'm looking for parallel Lisp idioms to implement on top
> of my primitives.
> 
> Right now, I have a macro plet which has the form:
> 
> (plet ((var1 (foo1))
>         (var2 (foo2))
>          ...)
>     (exit-form)
>   (form1)
>   (form2)
>    ...)
> 
> The idea is that the variables get initialized (as per an ordinary
> let), each form in the body gets executed in parallel, and exit-form
> is a chunk of code that executes after all parallel threads have ended
> which ties everything together.  This macro expands into my primitives
> to initialize / access variables via my centralized tracker and a loop
> which patiently waits for the threads to complete. Works well enough.
> 
> I also have a parallel MAP which splits a map up across multiple
> cpu's / cores.
> 
> Implementing this is what made me come looking for guidance because I
> can think of several ways to split up a MAP that might make sense
> under different circumstances.
> 
> Any ideas or pointers are welcome.
> -M

-- 
http://lispm.dyndns.org
From: Mike G.
Subject: Re: Parallel Lisp
Date: 
Message-ID: <1189004677.760007.315550@g4g2000hsf.googlegroups.com>
On Sep 5, 10:37 am, Rainer Joswig <······@lisp.de> wrote:
>
> QLisp (http://www.dreamsongs.com/Qlisp.html)
> Multilisp
> Paralation Lisp (there is a book about that, The Paralation Model)
> Connection Machine Lisp
>

Thank you! That first link is making for some very interesting
reading. Parallel Lisp constructs right from John McCarthy.
Awesome.
From: Mark H.
Subject: Re: Parallel Lisp
Date: 
Message-ID: <1189116170.888169.172220@y42g2000hsy.googlegroups.com>
On Sep 5, 7:23 am, "Mike G." <···············@gmail.com> wrote:
> I've spent a little free time over the last two months building a
> multi-processor / multi-core network computer in my basement. In no
> time at all, I've been able to hack up some primitives for doing
> parallel processing across this network in SBCL.

I'd love to hear about the details -- are you using MPI for the back-
end?

mfh
From: Mike G.
Subject: Re: Parallel Lisp
Date: 
Message-ID: <1189169655.137091.299580@d55g2000hsg.googlegroups.com>
On Sep 6, 6:02 pm, "Mark H." <············@gmail.com> wrote:
> On Sep 5, 7:23 am, "Mike G." <···············@gmail.com> wrote:
>
> > I've spent a little free time over the last two months building a
> > multi-processor / multi-core network computer in my basement. In no
> > time at all, I've been able to hack up some primitives for doing
> > parallel processing across this network in SBCL.
>
> I'd love to hear about the details -- are you using MPI for the back-
> end?
>
> mfh

No, I'm not using MPI just yet -- but that is an evolution I might take.
I used MPI for a while back in university and thought it was great, but
I'm not sure how much of a win it would be for Lisp. Any thoughts?

In any event, my core primitives are quite simple, and it would be easy
to abstract the data/code-passing out to MPI bindings in the future.
At least, from what I remember of MPI it will be easy, anyway :)

Right now I'm just using raw sockets in a peer-to-peer fashion. Peers
are assumed to have identical Lisp worlds. They query a tracker (possibly
one of many) to find other free peers who can do some work for them.
There is a simple RPC-like mechanism, which I haven't exploited much
yet - generally I just send out S-expressions to be EVALed.
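
The eval-over-a-socket idea can be sketched without any sockets at all:
PRIN1 the form on one side, READ and EVAL it on the other. This is only
a hedged illustration of the general technique, not my actual code -- the
names SERIALIZE-FORM and HANDLE-REQUEST are made up for the sketch, and
the "wire" here is just a string:

```lisp
;; Sketch of S-expression RPC: the "wire" is just a string.
;; In the real system this string would travel over a socket.
(defun serialize-form (form)
  "Print FORM readably so the peer can READ it back."
  (with-standard-io-syntax
    (prin1-to-string form)))

(defun handle-request (wire-string)
  "What a peer does on receipt: READ the form, EVAL it,
and serialize the result for the reply."
  (let ((form (with-standard-io-syntax
                (read-from-string wire-string))))
    (serialize-form (eval form))))

;; Round trip: send (+ 1 2 3), get "6" back on the wire.
(handle-request (serialize-form '(+ 1 2 3))) ; => "6"
```

WITH-STANDARD-IO-SYNTAX keeps both ends' reader/printer settings in
agreement, which matters once the peers really are separate images.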

The trackers also maintain parallel variables (stored in hash tables),
with simple spin-locks to keep everything tip-top.
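
A toy version of that variable table looks something like the sketch
below. This is not my tracker code -- the names are invented, and I've
used SBCL's SB-THREAD mutexes where the real thing uses spin-locks and
talks to peers over sockets rather than through a direct function call:

```lisp
;; Toy tracker store: one hash table guarded by a lock.
;; Assumes SBCL (SB-THREAD); names here are illustrative only.
(defvar *pvars* (make-hash-table :test #'equal))
(defvar *pvars-lock* (sb-thread:make-mutex :name "pvar-lock"))

(defun pvar-set (name value)
  "Store VALUE under NAME, serializing access across threads."
  (sb-thread:with-mutex (*pvars-lock*)
    (setf (gethash name *pvars*) value)))

(defun pvar-ref (name)
  "Fetch the current value of the parallel variable NAME."
  (sb-thread:with-mutex (*pvars-lock*)
    (gethash name *pvars*)))

(pvar-set "x" 42)
(pvar-ref "x") ; => 42
```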

On top of primitives to eval code and set/fetch parallel variables,
I have implemented pmap and plet* and a few other parallel analogs of
ordinary Common Lisp. They work quite well, and give a reasonable speed-up
given the circumstances -- I'm going to work in some logic to determine
what sorts of expressions can be invoked by my RPC mechanism instead of
invoking the full EVAL. I doubt it will do much good, but it's a worthy
compile-time optimization.
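
On the earlier question of how to split a MAP: the simplest policy is
contiguous chunks, one per worker. Here's a serial sketch of just that
policy -- the inner MAPCAR stands in for shipping a chunk to a free peer,
and the function names are mine for illustration:

```lisp
;; Split LIST into at most N contiguous chunks of near-equal size.
(defun chunk-list (list n)
  (let* ((len (length list))
         (size (max 1 (ceiling len n))))
    (loop for start from 0 below len by size
          collect (subseq list start (min len (+ start size))))))

;; A "pmap" skeleton: each chunk would go to a free peer;
;; here MAPCAR plays the part of the remote worker.
(defun pmap-sketch (fn list n-workers)
  (apply #'append
         (mapcar (lambda (chunk) (mapcar fn chunk))
                 (chunk-list list n-workers))))

(pmap-sketch #'1+ '(1 2 3 4 5 6 7) 3) ; => (2 3 4 5 6 7 8)
```

Contiguous chunking is only one of the strategies that might make sense;
interleaved (round-robin) splitting or work-stealing would distribute
unbalanced workloads better at the cost of more bookkeeping.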

It's all quite crude, to be sure, but it is still REALLY fun to be able
to run parallelized stuff from the REPL.

For kicks, I think I'm going to hook up some LEDs to the parallel ports
of these boards. As each thread goes active I'll have it turn on a light,
and when it idles it will turn it off. Might actually help with some
debugging too :)
From: ··············@gmail.com
Subject: Re: Parallel Lisp
Date: 
Message-ID: <1189407790.741815.236840@y42g2000hsy.googlegroups.com>
On Sep 5, 4:23 pm, "Mike G." <···············@gmail.com> wrote:

> The idea is that the variables get initialized (as per an ordinary
> let), each form in the body gets executed in parallel, and exit-form
> is a chunk of code that executes after all parallel threads have ended
> which ties everything together.  This macro expands into my primitives
> to initialize / access variables via my centralized tracker and a loop
> which patiently waits for the threads to complete. Works well enough.
>
> I also have a parallel MAP which splits a map up across multiple
> cpu's / cores.
>
> Implementing this is what made me come looking for guidance because I
> can think of several ways to split up a MAP that might make sense
> under different circumstances.
>

I am afraid that, on average, these schemes will not typically buy much,
especially when the threads are unbalanced in the time they take to
finish. Parallel processing is quickly gaining interest again because
we seem to be forced to go this route to keep up with Moore's law. In
some recent benchmarks, some clever Sony programmers did not quite
manage to get a significant benefit from the 8 Cell cores of the Sony
PlayStation 3, because the communication + synchronization overhead
almost completely absorbed the parallel advantage. At the time of the
Connection Machine we failed to gain much advantage from the 64K CPUs
for similar reasons.

In the end you need a higher level of representation than individual
map statements or parallel assignments. You need computational models
that are parallel at a conceptual level such as spreadsheets. For
instance, you could analyze complex spreadsheets. The dataflow can be
nicely parallelized and mapped onto your cluster and executed as
functions.

Personally, I would be interested to see Antiobjects mapped onto your
cluster. Take for instance a Collaborative Diffusion of a 2D or 3D AI
problem. Give the user some tools to define the entire space but let
your cluster split that space up into chunks that you distribute onto
your cores/cpus. With fairly minimal communication overhead you can
solve hard AI problems with nearly perfect load balancing yet minimal
if any cognitive overhead to the programmer. That would be exciting.
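
The diffusion update at the heart of that approach is embarrassingly
parallel: each cell of a "scent" grid is updated only from its
neighbors, so bands of rows can be farmed out to separate cores. A
serial, hedged sketch of one diffusion step (the function name and the
4-neighbor update rule here are my own simplification, not code from
the paper):

```lisp
;; One serial diffusion step over a 2D "scent" grid.
;; In a cluster version, bands of rows would go to different
;; cores, exchanging only their boundary rows each step.
(defun diffuse-step (grid &optional (d 0.25))
  (let* ((rows (array-dimension grid 0))
         (cols (array-dimension grid 1))
         (next (make-array (list rows cols) :initial-element 0.0)))
    (dotimes (i rows next)
      (dotimes (j cols)
        (let ((u (aref grid i j))
              (sum 0.0))
          ;; Sum (neighbor - u) over the 4-neighborhood.
          (loop for (di dj) in '((-1 0) (1 0) (0 -1) (0 1))
                for ni = (+ i di) and nj = (+ j dj)
                when (and (< -1 ni rows) (< -1 nj cols))
                  do (incf sum (- (aref grid ni nj) u)))
          (setf (aref next i j) (+ u (* d sum))))))))
```

Because each step reads only the previous grid, workers need to swap
just one boundary row per neighbor per step, which is where the low
communication overhead comes from.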


Repenning, A., Collaborative Diffusion: Programming Antiobjects. in
OOPSLA 2006, ACM SIGPLAN International Conference on Object-Oriented
Programming Systems, Languages, and Applications, (Portland, Oregon,
2006), ACM Press.


http://www.cs.colorado.edu/~ralex/papers/PDF/OOPSLA06antiobjects.pdf
From: Mike G.
Subject: Re: Parallel Lisp
Date: 
Message-ID: <1189421537.623416.84030@y42g2000hsy.googlegroups.com>
On Sep 10, 3:03 am, ··············@gmail.com wrote:
> On Sep 5, 4:23 pm, "Mike G." <···············@gmail.com> wrote:
>
> > The idea is that the variables get initialized (as per an ordinary
> > let), each form in the body gets executed in parallel, and exit-form
> > is a chunk of code that executes after all parallel threads have ended
> > which ties everything together.  This macro expands into my primitives
> > to initialize / access variables via my centralized tracker and a loop
> > which patiently waits for the threads to complete. Works well enough.
>
> > I also have a parallel MAP which splits a map up across multiple
> > cpu's / cores.
>
> > Implementing this is what made me come looking for guidance because I
> > can think of several ways to split up a MAP that might make sense
> > under different circumstances.
>
> I am afraid, on average, these schemes will not typically buy much
> especially when the threads are unbalanced in terms of time they take
> to finish.

Well, yes and no. If the thread itself is not parallelized, then you're
certainly correct. But in a situation where initial threads take more
time than later threads, some of the imbalance can be corrected if the
initial threads are themselves parallelized - my system is peer-to-peer,
so as peers finish up their work, they become available to take some of
the load from the initial thread. Provided, of course, that the threads
are written in such a way as to make that possible.

> In some recent benchmarks some clever Sony programmers did not
> quite manage to get a significant benefit from the 8 Cell processors
> of the SonyPlaystation 3 because of the communication +
> synchronization overhead almost completely absorbed the parallel
> advantage. At the time of the Connection machine we failed to gain
> much advantage of the 64k CPU for similar reasons.

I had contemplated using PS3s instead of my bare-bones Xeons. It costs
me more money per core this way (about twice as much, in fact), but I
can easily add all the RAM I like.

> In the end you need a higher level of representation than individual
> map statements or parallel assignments. You need computational models
> that are parallel at a conceptual level such as spreadsheets. For
> instance, you could analyze complex spreadsheets. The dataflow can be
> nicely parallelized and mapped onto your cluster and executed as
> functions.

Well, yes. Much more is needed than some simple macros, of course.
After I work through some more reading from the QLisp and *Lisp
materials I have, I'm going to start looking at some things like this.
Not necessarily spreadsheet- or matrix-related stuff.. I'll probably do
something for polynomials.

>
> Personally, I would be interested to see Antiobjects mapped onto your
> cluster. Take for instance a Collaborative Diffusion of a 2D or 3D AI
> problem. Give the user some tools to define the entire space but let
> your cluster split that space up into chunks that you distribute onto
> your cores/cpus. With fairly minimal communication overhead you can
> solve hard AI problems with nearly perfect load balancing yet minimal
> if any cognitive overhead to the programmer. That would be exciting.
>
> Repenning, A., Collaborative Diffusion: Programming Antiobjects. in
> OOPSLA 2006, ACM SIGPLAN International Conference on Object-Oriented
> Programming Systems, Languages, and Applications, (Portland, Oregon,
> 2006), ACM Press.
>
> http://www.cs.colorado.edu/~ralex/papers/PDF/OOPSLA06antiobjects.pdf

:) Thanks for the link. I'll let you know if I end up doing anything
related to this.
From: ··············@gmail.com
Subject: Re: Parallel Lisp
Date: 
Message-ID: <1189502007.304240.19770@r34g2000hsd.googlegroups.com>
On Sep 10, 12:52 pm, "Mike G." <···············@gmail.com> wrote:
> On Sep 10, 3:03 am, ··············@gmail.com wrote:
>
>
>
> > On Sep 5, 4:23 pm, "Mike G." <···············@gmail.com> wrote:
>
> I had contemplated using PS3's instead of my bare-bones Xeons. It
> costs
> me more money per core this way (about twice as much, in fact) but I
> can
> easily add all the RAM I like.

One performance issue that is discouraging for parallel computation: of
the PS3's 2 Tflops, the 8 Cell cores account for only 0.2, while the
SINGLE GPU accounts for 1.8.

>
>
> > Repenning, A., Collaborative Diffusion: Programming Antiobjects. in
> > OOPSLA 2006, ACM SIGPLAN International Conference on Object-Oriented
> > Programming Systems, Languages, and Applications, (Portland, Oregon,
> > 2006), ACM Press.
>
> >http://www.cs.colorado.edu/~ralex/papers/PDF/OOPSLA06antiobjects.pdf
>
> :) Thanks for the link. I'll let you know if I end up doing anything
> related to this.

P.S. I forgot to say what is probably most interesting to this group:
the current implementation of Antiobjects, including the visualization,
is all done in Lisp.
From: Mike G.
Subject: Re: Parallel Lisp
Date: 
Message-ID: <1189517270.248264.6420@r29g2000hsg.googlegroups.com>
On Sep 11, 5:13 am, ··············@gmail.com wrote:

> One discouraging, in terms of parallel computation, performance issue
> is that of the 2 Tflops the PS3 makes 0.2 are by the 8 cells and 1.8
> by the SINGLE GPU.

Actually, I think the PS3 has only 7 cells enabled (to increase usable
yield out of the fab). But yes.. requiring access to the GPU stinks -
mostly because, AFAIK, we have no open specs for it. :(

> ps. forgot to say what is probably most interesting to this group. The
> current implementation, including visualization, of Antiobjects is all
> done with Lisp.

Is the source available?
From: Sacha
Subject: Re: Parallel Lisp
Date: 
Message-ID: <b49Fi.98026$Q35.4355294@phobos.telenet-ops.be>
··············@gmail.com wrote:

> Repenning, A., Collaborative Diffusion: Programming Antiobjects. in
> OOPSLA 2006, ACM SIGPLAN International Conference on Object-Oriented
> Programming Systems, Languages, and Applications, (Portland, Oregon,
> 2006), ACM Press.
> 
> 
> http://www.cs.colorado.edu/~ralex/papers/PDF/OOPSLA06antiobjects.pdf
> 

Very interesting, thanks a lot for sharing this.

Sacha