From: Robert Maas, http://tinyurl.com/uh3t
Subject: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <rem-2008apr25-003@yahoo.com>
Does anybody you know of have access to a computer with at least 64
CPUs, with a version of Common Lisp that runs on that computer and
supports distributing the function-applications within a MAPCAR
call across as many CPUs as are available in order to achieve great
speed-up compared to the usual algorithm of performing each
function-application in sequence down the list? Would anybody
volunteer such a system for me to occasionally use across the net
without charge, for research purposes?

Of course if there are dependencies from one function application
to the next, this parallel-mapcar wouldn't be appropriate. But I
have an application where I need to apply a single function to a
large number of arguments in parallel. I'm running it with nearly
three hundred at the moment, whereupon it takes several minutes to
do them all in succession, which isn't too bad if done rarely, but
I envision doing the same with thousands of arguments, whereby the
time to do them in succession would be prohibitive.
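
For concreteness, here is a minimal sketch of what such a parallel
MAPCAR might look like, written against SBCL's sb-thread package (an
assumption; no particular implementation is specified above). It
naively spawns one thread per element, where a serious version would
use a bounded pool of worker threads:

  (defun parallel-mapcar (function list)
    "Apply FUNCTION to each element of LIST in its own thread,
then join the threads and collect their results in order."
    (mapcar #'sb-thread:join-thread
            (mapcar (lambda (x)
                      (sb-thread:make-thread
                       (lambda () (funcall function x))))
                    list)))

The workload described above -- one function, many independent
arguments -- is exactly the embarrassingly-parallel shape this
expresses.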

From: Tim Bradshaw
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <5773778f-c869-4f54-83b4-7d6a7221e4f3@m73g2000hsh.googlegroups.com>
On Apr 26, 2:21 am, ·················@SpamGourmet.Com (Robert Maas,
http://tinyurl.com/uh3t) wrote:
> Does anybody you know of have access to a computer with at least 64
> CPUs, with a version of Common Lisp that runs on that computer and
> supports distributing the function-applications within a MAPCAR
> call across as many CPUs as are available in order to achieve great
> speed-up compared to the usual algorithm of performing each
> function-application in sequence down the list? Would anybody
> volunteer such a system for me to occasionally use across the net
> without charge, for research purposes?

Those systems typically don't exist any more, because the memory
coherency & latency costs are too high for distributing stuff in such
a fine-grained way.  There are plenty of machines with lots of cores -
the systems I deal with mostly have up to 144, though we very seldom
use them in a single domain (and no, they can't be lent out).  But
these systems typically make use of significantly coarser-grained
multiprocessing, so that you're not just killed by communication cost
all the time.

One interesting possibility though is heavily multicored/multithreaded
processors such as Sun's Niagara family.  I don't know what the
latency issues are between cores for these things, but between HW
threads (which look like virtual CPUs) it is essentially zero I
think.  I think the largest of these systems is currently 16 core /
128 thread.

I doubt there are CL implementations which take advantage of these
systems however.
From: Patrick May
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <m2prsb8ahc.fsf@spe.com>
Tim Bradshaw <··········@tfeb.org> writes:
> On Apr 26, 2:21 am, ·················@SpamGourmet.Com (Robert Maas,
> http://tinyurl.com/uh3t) wrote:
>> Does anybody you know of have access to a computer with at least 64
>> CPUs, with a version of Common Lisp that runs on that computer and
>> supports distributing the function-applications within a MAPCAR
>> call across as many CPUs as are available in order to achieve great
>> speed-up compared to the usual algorithm of performing each
>> function-application in sequence down the list? Would anybody
>> volunteer such a system for me to occasionally use across the net
>> without charge, for research purposes?
>
> Those systems typically don't exist any more, because the memory
> coherency & latency costs are too high for distributing stuff in
> such a fine-grained way.

     Don't do that, then.  ;-)

     Space Based Architecture is one technique for avoiding the
distribution overhead.  The basic idea is to co-locate the logic and
data for a particular instance of an end-to-end use case into a single
processing unit that also provides a messaging infrastructure.  This
allows near linear scalability across a broad range of hardware
resources.  (Full disclosure:  I do this for my day job.)
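
To make the co-location idea concrete, here is a hedged toy sketch in
Lisp (the names and the hash-based partitioning are purely
illustrative, not any particular product's API): requests are routed
by key to the single unit owning that key's data, so the logic always
runs next to its data and no cross-unit chatter is needed:

  (defstruct unit
    (data (make-hash-table :test #'equal)))  ; this unit's local state

  (defparameter *units*
    (coerce (loop :repeat 4 :collect (make-unit)) 'vector))

  (defun owning-unit (key)
    "Deterministically map KEY to the processing unit that owns it."
    (aref *units* (mod (sxhash key) (length *units*))))

  (defun handle (key fn)
    "Run FN against KEY's co-located data in the owning unit."
    (funcall fn (unit-data (owning-unit key))))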

[ . . . ]
> I doubt there are CL implementations which take advantage of these
> systems however.

     That's too bad.  I'd much rather implement this architecture in
Lisp than in Java.  How good is the SMP support in commercial Lisps?

Regards,

Patrick

------------------------------------------------------------------------
S P Engineering, Inc.  | Large scale, mission-critical, distributed OO
                       | systems design and implementation.
          ···@spe.com  | (C++, Java, Common Lisp, Jini, middleware, SOA)
From: Tim Bradshaw
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <1d077f59-cccd-4237-a163-76ee0162fd06@m73g2000hsh.googlegroups.com>
On Apr 27, 6:37 pm, Patrick May <····@spe.com> wrote:
> such a fine-grained way.
>
>      Don't do that, then.  ;-)

That was pretty much my suggestion.
From: ··········@gmail.com
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <4db57361-e6fd-4b0e-9d73-7e8152faf7e4@c65g2000hsa.googlegroups.com>
You might want to take a glance at Azul, who are making multiprocessor
add-on boxes to run Java.  It's (at least a little bit) along the
lines
of the plug-in-board Lisp machines that Symbolics made and the bigger
ones they once contemplated.  It'll be interesting to see how well
they sell.
From: Alex Mizrahi
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <4814c770$0$90265$14726298@news.sunsite.dk>
 TB> I doubt there are CL implementations which take advantage of these
 TB> systems however.

1. i think what Robert does can be just launched in multiple processes, even 
on different machines.
    Robert is just too lazy to implement this, so he wants support in the core 
language.
    (also he could get a speedup from just a faster CPU.
     probably some modern 4-core Xeon would be not much worse than a rusty
     64-CPU machine or that Niagara monster)

2. i doubt there's a huge need for parallelism support in the core language.
    most likely a parallel:mapcar implemented in a library will do just fine.

3. ABCL should work fine on niagaras, since it uses underlying JVM threading

4. SBCL and SCL might work too 
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <rem-2008may01-002@yahoo.com>
> From: "Alex Mizrahi" <········@users.sourceforge.net>
> i think what Robert does can be just launched in multiple
> processes, even on different machines.

Yes, but I was wondering if any existing Lisp implementation
provided a way to specify a parallel MAPCAR which would then be
automatically distributed per some configuration that had been set
up previously. For example, a cooperative set of CPUs might be in
an arrangement whereby whenever one of them wants to run a burst of
parallel computation and the others have some idle time then
FORK-MAPCAR automatically distributes across all the CPUs. Of
course for this to be practical they must all be physically located
close together so that the master algorithm can be quickly
distributed onto all the CPUs (or accessed in real time by shared
memory bus).

For launching on multiple machines connected across the InterNet,
like the way ····@home is done, I don't imagine FORK-MAPCAR being
practical. Instead, only a small set of algorithms would be
pre-distributed to the various hosts, and each such host registered
as now available to run that particular algorithm, and then when
there's a task to be done in parallel the parallel sub-tasks would
be sent out per that registry.

> Robert is just too lazy to implement this, so he wants support
> in the core language.

That's an extremely hostile/nasty thing for you to say about
somebody you've never met in person. You have no idea how much work
I do, without pay, to accomplish various major processing tasks.

I was simply inquiring whether any *one* (1) person might have
access to a single multi-CPU machine with such auto-distributing software
up&running and available for occasional use by outsiders when not
otherwise busy. That seemed a more reasonable request than to ask
hundreds of different people to *each* volunteer their single-CPU
servers to join me in building a brand-new protocol for
distributing software for such tasks and maintaining a registry of
which hosts have which software and which hosts are available at
any given time and distributing tasks via this new protocol we've
invented. But if you think asking for hundreds of volunteers to
work with me on a new protocol would be a more reasonable request,
so be it.

> (also he could get a speedup from just a faster CPU. probably some
>  modern 4-core Xeon would be not much worse than a rusty 64-CPU
>  machine or that Niagara monster)

Well if *one* person has access to a CPU that is several orders of
magnitude faster than the Intel processor used by a commercial ISP,
and would be willing to let me use it from time to time to see how
much faster it is at the kinds of tasks I have in mind, I'd be
willing to give that a try too. But that really doesn't scale well,
compared to more CPUs running in parallel. If I want to tackle
processing a batch of a hundred million records at a time, I really
don't think any single CPU or even quad-CPU that fast exists at
all.

> i doubt there's huge need in parallelism support in core
> language. most likely parallel:mapcar implemented in library will
> do just fine.

My wording must have been unclear. parallel:mapcar in a library,
using some super-efficient built-in mechanism for forking on a
64-core computer, would satisfy my query just fine. But
parallel:mapcar in a library without any support for efficient
threads in the underlying CL would probably not work well.

> 3. ABCL should work fine on niagaras, since it uses underlying
> JVM threading

What's the largest number of available CPUs ("cores") that anybody
in this newsgroup has available for playing around with this sort
of thing using ABCL?

> 4. SBCL and SCL might work too

Same question for SBCL and SCL.
From: Alex Mizrahi
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <481adf6c$0$90265$14726298@news.sunsite.dk>
 ??>> i think what Robert does can be just launched in multiple
 ??>> processes, even on different machines.

 RM> Yes, but I was wondering if any existing Lisp implementation
 RM> provided a way to specify a parallel MAPCAR which would then be
 RM> automatically distributed per some configuration that had been set
 RM> up previously.

no, i mean a different thing -- in many cases it's possible to parallelize 
processing with mere shell scripts, requiring no support from the 
language/libraries themselves.
from what you're saying, your task is like that -- it should be easy to write 
some shell scripts that launch processing on multiple processes/nodes.
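
for illustration, a hedged sketch of the lisp side of such a setup
(the file names and the round-robin split are arbitrary choices
here): deal the inputs into N chunk files, then let N separately
launched lisp processes each READ back and process one file:

  (defun write-chunks (items n &key (prefix "chunk"))
    "Deal ITEMS round-robin into files PREFIX0.data .. PREFIX(N-1).data,
one form per line, so N independent worker processes can each
process a single file."
    (loop :for i :below n
          :do (with-open-file (out (format nil "~A~D.data" prefix i)
                                   :direction :output
                                   :if-exists :supersede)
                (loop :for item :in (nthcdr i items)
                      :by (lambda (l) (nthcdr n l))
                      :do (print item out)))))

each worker process then just loads your code and READs its own chunk
file; the shell script's only job is to launch N of them.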

you only need some fancy parallelization support if you have either:
 * lots of small (executed in less than, say, 0.1 sec) tasks that depend on 
each other: outputs of one function are fed to input of another
 * enormous amounts of input data that is not easy to share

 RM> For launching on multiple machines connected across the InterNet,
 RM> like the way ····@home is done, I don't imagine FORK-MAPCAR being
 RM> practical. Instead, only a small set of algorithms would be
 RM> pre-distributed to the various hosts, and each such host registered
 RM> as now available to run that particular algorithm, and then when
 RM> there's a task to be done in parallel the parallel sub-tasks would
 RM> be sent out per that registry.

between single huge SMP system and ····@home scale there are "cluster" 
solutions, and they are actually easiest to work with.

 ??>> Robert is just too lazy to implement this, so he want's support
 ??>> in core language.

 RM> That's an extremely hostile/nasty thing for you to say about
 RM> somebody you've never met in person.

no, it would be hostile to call you an asshole (i have no problems calling 
people assholes on usenet either).
"lazy" is not even always a bad thing -- maybe you just have other stuff to 
do.

 RM>  You have no idea how much work I do, without pay, to accomplish
 RM> various major processing tasks.

sorry, but often "without pay" means "what nobody needs".

 RM> I was simply inquiring whether any *one* (1) person might have
 RM> access to a single multi-CPU machine with such auto-distributing software
 RM> up&running and available for occasional use by outsiders when not
 RM> otherwise busy. That seemed a more reasonable request than to ask
 RM> hundreds of different people to *each* volunteer their single-CPU
 RM> servers to join me in building a brand-new protocol for
 RM> distributing software for such tasks and maintaining a registry of
 RM> which hosts have which software and which hosts are available at
 RM> any given time and distributing tasks via this new protocol we've
 RM> invented. But if you think asking for hundreds of volunteers to
 RM> work with me on a new protocol would be a more reasonable request,
 RM> so be it.

as i've mentioned above, you could also ask for a "cluster" -- just a bunch of 
small servers connected via a local network.
they are *much* easier to find than a huge SMP machine -- for example, when i 
was a student in university, we had a class with 10 machines available to 
us, and sometimes we were launching distributed rendering tasks on them.
and a local cluster is much easier to manage than a distributed network on 
the internet: for example, in a cluster you can launch a task on a remote 
node as easily as

  ssh remote-node sbcl --load process10.lisp > results10.txt

also, you can get readily available "grid computing" software, that is able 
to automatically find idle nodes and use them to launch processes.

but, you know, this "brand-new" protocol you mention is actually an 
interesting thing -- if you actually *implement* it and release it as open 
source software, it would be quite interesting to the Lisp community. 
and it would be much easier to convince people to contribute their resources 
to testing this distributed network.

 RM> Well if *one* person has access to a CPU that is several orders of
 RM> magnitude faster than the Intel processor used by a commercial ISP,

"Intel processor used by a commercial ISP" is hardly a benchmark. if they've 
bought it some 5 years ago,
i'm pretty sure current top Intel processors are more than 10 times faster.

 ??>> 3. ABCL should work fine on niagaras, since it uses underlying
 ??>> JVM threading

 RM> What's the largest number of available CPUs ("cores") that anybody
 RM> in this newsgroup has available for playing around with this sort
 RM> of thing using ABCL?
 RM> 4. SBCL and SCL might work too
 RM> Same question for SBCL and SCL.

i have an 8-core machine here, and it seems to be working fine with SBCL -- i 
was able to launch 8 threads that do processing, and indeed all 8 cores were 
utilized.
i'm pretty sure ABCL and SCL would work fine there too.
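
for reference, a hedged sketch of that kind of test -- N worker
threads draining a shared task list under a mutex (sb-thread names
assumed; any CL with native threads looks similar):

  (defun run-workers (tasks n-threads)
    "Pop tasks off the shared list TASKS from N-THREADS threads
until none remain, then join all the threads."
    (let ((lock (sb-thread:make-mutex)))
      (flet ((next-task ()
               (sb-thread:with-mutex (lock) (pop tasks))))
        (mapc #'sb-thread:join-thread
              (loop :repeat n-threads
                    :collect (sb-thread:make-thread
                              (lambda ()
                                (loop :for task = (next-task)
                                      :while task
                                      :do (funcall task)))))))))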

i'd be glad to give you access to this system, but i don't own it, and i 
don't think i'm authorized to share access to it, sorry 
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <rem-2008may08-001@yahoo.com>
> From: "Alex Mizrahi" <········@users.sourceforge.net>
> in many cases it's possible to parallelize processing with mere
> shell scripts, requiring no support from language/libraries itself.
> from what you're saying, you task is like that -- it should be easy
> to write some shell scripts that launch processing on multiple
> processes/nodes.

That would seem to require either re-starting Lisp multiple times
and re-loading the application-level software on each such clone
process, or somehow saving an executable Lisp that already has all
the software loaded and re-starting that saved executable multiple
times. Question: If the latter option is possible, then can this be
done on an ordinary operating system in such a way that all the
clone processes share the same memory until and unless one of them
breaks out of pure-page sharing by modifying one of the pure pages
making it impure for just that one process, after which all the
other processes share that page (until each of them in turn also
modifies it) but still *all* processes share the rest of the pages
which nobody has made impure yet? I believe this was possible on
ITS, but I don't know if it works that way on Unix or Linux. If all
that is possible, then indeed that might be a feasible way to
parallelize my algorithms on a computer that had tens or hundreds
of CPU cores.

On the other hand, perhaps a computer with so many CPU cores might
use some non-standard operating system in order to better manage
all the parallel-processing that is possible. So I guess my
original query about anyone having access to such a system is more
general than I originally imagined.

> you only need some fancy parallelization support if you have either:
>  * lots of small (executed in less than, say, 0.1 sec) tasks that
> depend on each other: outputs of one function are fed to input of
> another
>  * enormous amounts of input data that is not easy to share

For my application, it typically takes a fraction of a second to a
few seconds per single ProxHash calculation, with all the Lisp code
interpreted (actually JIT-compiled in CMUCL). So if I had tens of
thousands of ProxHash calculations to perform (conceptually all in
parallel), and a computer with 64 or more CPU cores, would a shell
script be able to farm them off to new processes at an average rate
of 1/N second per new process, where N is the number of CPU cores,
in order to keep all the CPU cores constantly busy, in order to
finish the ProxHash calculations in minimum real time?

> between single huge SMP system and ····@home scale there are "cluster"
> solutions, and they are actually easiest to work with.

So that would be something like 16 or more quad-core computers
connected on a very fast localnet, so that ProxHash calculations
could be farmed off to localnet-remote computers rapidly enough to
keep all four cores of each machine busy?

> but often "without pay" means "what nobody needs".

More likely it means I don't know how to advertise without spamming,
so nobody knows what's available, because I'm not willing to spam
just to find beta testers. There might be lots of people who need the
kind of software I develop, but they have no practical way to know
that I've developed anything like that because I don't have the
money to place ads on prime-time television, so they don't even
think of the kinds of solutions I invent, so they don't realize
what they're missing.

> "Intel processor used by a commercial ISP" is hardly a benchmark.
> if they've bought it some 5 years ago, i'm pretty sure current top
> Intel processors are more than 10 times faster.

I am not privy to such knowledge regarding the commercial ISP where
I have my shell account. From time to time I see system messages
about upgrading to a new CPU or replacing a CPU that has been
flaky, but I don't have total recall memory so I don't remember the
details. A few weeks ago the shell machine hung and was down for
1.5 hours, and when it came back up there was a system message
saying there'd be a new CPU soon. But now the message is gone and
there's no announcement of a new CPU so I don't know what happened.

Do you know of any good benchmark that a single user can run on his
personal account on a commercial ISP that has hundreds of users,
some time when it's not too busy? For that matter, do you know any
easy way for a single user to determine how busy the computer is at
the moment? Right now, finger shows (most columns removed, rows sorted):
  Idle  Login 
        Wed   
        Wed   
        Wed   
        Wed   
  1:30  Wed   
  2:02  Wed   
  2:21  Wed   
  6:42  Wed   
 14:41  Sun   
 22:42  Mon   
    1d  Tue   
    3d  Sun   
    3d  Sun   
From looking at that, can you estimate how busy the system is?
Otherwise, is there some other program that will tell me?
Tops-20 at Stanford had a command 'load' which gave that info,
and Tenex had control-T, at least that's how I vaguely remember it.
That was so long ago I am not at all sure, and Unix has neither,
so it's irrelevant, except to give you a general idea what info I need
before running any sort of CPU timing test. Not just because results
would be inaccurate if the system is heavily loaded, but I'm not
allowed to use so much CPU time that it interferes with other users.
From: Madhu
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <m3lk2kud44.fsf@meer.net>
* (Robert Maas, http://tinyurl.com/uh3t) <·················@yahoo.com> :
Wrote on Thu, 08 May 2008 00:48:01 -0700:

| That would seem to require either re-starting Lisp multiple times and
| re-loading the application-level software on each such clone process,
| or somehow saving an executable Lisp that already has all the software
| loaded and re-starting that saved executable multiple times.

This is the premise behind

<URL:http://franz.com/support/tech_corner/forksmp.lisp>
<URL:http://franz.com/support/tech_corner/forksmp.lhtml>

which attempts to use multiple processes at UNIX process granularity.
[search google for "cmucl forksmp" if you need an initial cmucl version
of this]

| Question: If the latter option is possible, then can this be done on
| an ordinary operating system in such a way that all the clone
| processes share the same memory until and unless one of them breaks
| out of pure-page sharing by modifying one of the pure pages making it
| impure for just that one process, after which all the other processes
| share that page [...] I believe this was possible on ITS, but I don't
| know if it works that way on Unix or Linux.

"modern" unix VMs support mmap(2) calls with MAP_PRIVATE, which does
copy on write.  CMUCL, for example uses this in mapping the corefile, so
you get the behaviour for free
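
Concretely, a minimal sketch of exploiting that from CMUCL (using its
implementation-specific UNIX package; fork once everything is loaded,
and each child gets a copy-on-write view of the whole image):

  (defun fork-worker (thunk)
    "Run THUNK in a forked child sharing the parent's pages
copy-on-write; in the parent, return the child's PID."
    (let ((pid (unix:unix-fork)))
      (if (zerop pid)
          (progn (funcall thunk)       ; child: do the work ...
                 (unix:unix-exit 0))   ; ... then exit without returning
          pid)))                       ; parent: just note the child PID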

| If all that is possible, then indeed that might be a feasible way to
| parallelize my algorithms on a computer that had tens or hundreds of
| CPU cores.

I suspect such innovation is not encouraged in the current scenario.
Instead you are advised to use extant thread APIs in partitioning your
tasks, and leave the memory management and mapping of
threads-to-processors to the OS thread scheduler.  The OS/CPU vendors
are expected to take care of caching and other shenanigans.

--
Madhu
From: Tim Bradshaw
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <1f431510-a8bb-4c8c-8b43-8b3afb511402@b1g2000hsg.googlegroups.com>
On May 8, 8:48 am, ···················@SpamGourmet.Com (Robert Maas,
http://tinyurl.com/uh3t) wrote:
> Question: If the latter option is possible, then can this be
> done on an ordinary operating system in such a way that all the
> clone processes share the same memory until and unless one of them
> breaks out of pure-page sharing by modifying one of the pure pages
> making it impure for just that one process, after which all the
> other processes share that page (until each of them in turn also
> modifies it) but still *all* processes share the rest of the pages
> which nobody has made impure yet? I believe this was possible on
> ITS, but I don't know if it works that way on Unix or Linux. If all
> that is possible, then indeed that might be a feasible way to
> parallelize my algorithms on a computer that had tens or hundreds
> of CPU cores.

That typically is the way modern Unix(-oid) systems work.
(Disclaimer: I'm only deeply familiar with Solaris.)  However on any
significant modern machine things are quite a lot more complex because
access to memory is not simple. So (for instance) almost any system
will, in fact, aggressively make copies of pages it is using in the
caches of the core concerned.  Even for main memory, I think most
large SMP systems are now either explicitly or implicitly NUMA,
meaning you have to be concerned about which cores pages are near, and
you may well want to move pages around within main memory because of
this.  Solaris does the latter, for instance.

>
> On the other hand, perhaps a computer with so many CPU cores might
> use some non-standard operating system in order to better manage
> all the parallel-processing that is possible. So I guess my
> original query about anyone having access to such a system is more
> general than I originally imagined.

The break is almost certainly between systems with shared address
spaces (probably with hardware-supported coherency), where Unix does
very well, and scales to quite large numbers of cores (hundreds), and
systems which don't, where you probably effectively have multiple OS
instances.  Any sufficiently large system is of the latter kind, but
CC-shared-address systems can get quite big.  The edge case is shared-
address-space non-coherent systems: I don't know if there are many of
those (I think some of the 90s Cray boxes like the T3d and T3e were
like that, but I am not sure).

--tim
From: Pascal J. Bourguignon
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <7cwsly2wpo.fsf@pbourguignon.anevia.com>
···················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t) writes:
> Do you know of any good benchmark that a single user can run on his
> personal account on a commercial ISP that has hundreds of users,
> some time when it's not too busy? 

On a linux system you can have a look at /proc/cpuinfo.

cat /proc/cpuinfo

Otherwise, just run your own benchmark on  your various systems.  Mine
is compilation of emacs:

cd emacs-22.1 ; time (./configure && make)
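
Or, since your workload is Lisp, time a CPU-bound loop from inside
the Lisp itself -- a crude stand-in benchmark, nothing more:

  (time (loop :for i :of-type fixnum :below 100000000
              :sum (* i i)))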


> For that matter, do you know any
> easy way for a single user to determine how busy the computer is at
> the moment? Right now, finger shows (most columns removed, rows sorted):

Finger is not the right tool.  To know the load of a system, the right
tool is obviously:

     loadavg

well, it may not exist (anymore) in which case, again obviously:

     uptime

would give you the load average such as:

 14:44:30 up  4:39,  2 users,  load average: 0.05, 0.04, 0.13


> That was so long ago I am not at all sure, and Unix has neither,
> so it's irrelevant, except to give you a general idea what info I need
> before running any sort of CPU timing test. Not just because results
> would be inaccurate if the system is heavily loaded, but I'm not
> allowed to use so much CPU time that it interferes with other users.

But like always in unix, you don't usually test whether resources
exist before using them, because by the time you try to use them, they
may already be exhausted, given that it's a multiuser system without
resource reservation.  What you just do, is to use the resources you
need.  And if you want to be nice to the other users, then just that,
be nice, using the nice(1) command:

   nice  my-big-cpu-very-intensive-program

This will make your my-big-cpu-very-intensive-program program use all
the CPU time it can, but only when there is no other process competing
for that resource.

Another tool you may use is batch(1), which enqueues jobs, and will
start them only when the load average is low enough.  That is, if it's
installed on your system.  If you had a lot of little tasks to
execute, you could use batch.


-- 
__Pascal Bourguignon__
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <rem-2008may15-001@yahoo.com>
> From: ····@informatimago.com (Pascal J. Bourguignon)
> On a linux system you can have a look at /proc/cpuinfo.

That doesn't help me on my shell account running FreeBSD Unix.

> cat /proc/cpuinfo

% more /proc/cpuinfo
/proc/cpuinfo: No such file or directory

> To know the load of a system, the right tool is obviously:
>     loadavg

% loadavg
loadavg: Command not found.

> well, it may not exist (anymore) in which case, again obviously:
>     uptime

% uptime
 5:01AM  up 3 days, 25 mins, 9 users, load averages: 5.11, 6.07, 6.49
(Oh oh, there's that cliche about third time ... charm?)
Anyway, with load around 5 or 6 it looks too busy for me to run any
major CPU-speed benchmark.

> would give you the load average such as:
> 14:44:30 up  4:39,  2 users,  load average: 0.05, 0.04, 0.13

I don't know what machine gave you *those* numbers, but that
machine is sorely underutilized?

> And if you want to be nice to the other users, then just that, be
> nice, using the nice(1) command:
>   nice  my-big-cpu-very-intensive-program

Since all my heavy computations are from a single CMUCL REP, I
guess I need to say "nice lisp" when starting up in the morning,
and leave it that way all day until I'm ready to (quit) and go to
bed?

> Another tool you may use is batch(1), which enqueues jobs, and
> will start them only when the load average is low enough.

That's worthless for my purpose, unless you know a way to use it to
deal with an interactive Lisp session.

> If you had a lot of little tasks to execute, you could use batch.

If I have a lot of little tasks to execute, I put them into a PROGN
or LIST, and if I'm going to do the same sequence of tasks many
times over different days then I might put them into a DEFUN.
For example, here is the LIST of tasks I do when I first start up CMUCL:
(list
 (setq *gc-verbose* nil)
 (unless (fboundp 'funct+shortdir+fn1-mayload)
    (load "/home/users/rem/LispWork/2007-2-mayload.lisp"))
 (funct+shortdir+fn1-mayload 'filenamebase-std-roll-before :LISP "2007-2-roll")
 (funct+shortdir+fn1-mayload 'load-file-by-method :LISP "2005-8-readers")
 (funct+shortdir+fn1-mayload 'dirspec+filnam+method+globsym-may-load :LISP "2008-4-MayLoad")
 (funct+shortdir+fn1-mayload 'make-empty-heap :LISP "2001-B-heap")
 (funct+shortdir+fn1-mayload 'string-read-words-batch-sort :LISP "2008-3-WordHist")
 (funct+shortdir+fn1-mayload 'phvec-normalize :LISP "2008-3-ProxHash")
 (funct+shortdir+fn1-mayload 'device-to-ar+mr+mc :LISP "2008-3-TextGraphics")
 (funct+shortdir+fn1-mayload 'trans-skills-lines :LISP "2008-3-TopPH")
 (funct+shortdir+fn1-mayload 'trans-skills-3d-012-links :LISP "2008-5-TopPH")
 )
I don't see how batch could possibly help me with those tasks.
From: Pascal J. Bourguignon
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <7cod77zk66.fsf@pbourguignon.anevia.com>
···················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t) writes:

>> From: ····@informatimago.com (Pascal J. Bourguignon)
>> On a linux system you can have a look at /proc/cpuinfo.
>
> That does't help me on my shell account running FreeBSD Unix.
>
>> cat /proc/cpuinfo
>
> % more /proc/cpuinfo
> /proc/cpuinfo: No such file or directory
>
>> To know the load of a system, the right tool is obviously:
>>     loadavg
>
> % loadavg
> loadavg: Command not found.
>
>> well, it may not exist (anymore) in which case, again obviously:
>>     uptime
>
> % uptime
>  5:01AM  up 3 days, 25 mins, 9 users, load averages: 5.11, 6.07, 6.49
> (Oh oh, there's that cliche about third time ... charm?)
> Anyway, with load around 5 or 6 it looks too busy for me to run any
> major CPU-speed benchmark.

Well, 5:00 AM is often the time at which the last cron tasks of the
night are just started.  Try it at another random time.


>> would give you the load average such as:
>> 14:44:30 up  4:39,  2 users,  load average: 0.05, 0.04, 0.13
>
> I don't know what machine gave you *those* numbers, but that
> machine is sorely underutilized?

Not all the time :-)  Also it's a quad-core, so the average is divided
by about four, compared to a normal processor.



>> And if you want to be nice to the other users, then just that, be
>> nice, using the nice(1) command:
>>   nice  my-big-cpu-very-intensive-program
>
> Since all my heavy computations are from a single CMUCL REP, I
> guess I need to say "nice lisp" when starting up in the morning,
> and leave it that way all day until I'm ready to (quit) and go to
> bed?

Not necessarily.  But you can only increase the nicety level of your processes.

$ clisp  -x '(loop :for i :from 0 :do (* i i))' >/dev/null 2>&1 & 
[2] 13589
$ renice +4 13589
13589: old priority 0, new priority 4
$ renice +6 13589
13589: old priority 4, new priority 6
$ renice +2 13589
renice: 13589: setpriority: Permission denied
$ renice +5 13589
renice: 13589: setpriority: Permission denied
$ renice +6 13589
13589: old priority 6, new priority 6
$ renice +7 13589
13589: old priority 6, new priority 7

Only root can reduce it.

So you could start with a normal nicety, do your loading and
compilation full speed, and once you launch your computation, you can
detach your process and renice it.


>> Another tool you may use is batch(1), which enqueues jobs, and
>> will start them only when the load average is low enough.
>
> That's worthless for my purpose, unless you know a way to use it to
> deal with interactive Lisp session.

Indeed, that would be useful only if you could cut your task in
smaller batches that could be run independently and without
supervision.

>> If you had a lot of little tasks to execute, you could use batch.
>
> If I have a lot of little tasks to execute, I put them into a PROGN
> or LIST, and if I'm going to do the same sequence of tasks many
> times over different days then I might put them into a DEFUN.
> For example, here is the LIST of tasks I do when I first start up CMUCL:
> (list
>  (setq *gc-verbose* nil)
>  (unless (fboundp 'funct+shortdir+fn1-mayload)
>     (load "/home/users/rem/LispWork/2007-2-mayload.lisp"))
>  (funct+shortdir+fn1-mayload 'filenamebase-std-roll-before :LISP "2007-2-roll")
>  (funct+shortdir+fn1-mayload 'load-file-by-method :LISP "2005-8-readers")
>  (funct+shortdir+fn1-mayload 'dirspec+filnam+method+globsym-may-load :LISP "2008-4-MayLoad")
>  (funct+shortdir+fn1-mayload 'make-empty-heap :LISP "2001-B-heap")
>  (funct+shortdir+fn1-mayload 'string-read-words-batch-sort :LISP "2008-3-WordHist")
>  (funct+shortdir+fn1-mayload 'phvec-normalize :LISP "2008-3-ProxHash")
>  (funct+shortdir+fn1-mayload 'device-to-ar+mr+mc :LISP "2008-3-TextGraphics")
>  (funct+shortdir+fn1-mayload 'trans-skills-lines :LISP "2008-3-TopPH")
>  (funct+shortdir+fn1-mayload 'trans-skills-3d-012-links :LISP "2008-5-TopPH")
>  )
> I don't see how batch could possibly help me with those tasks.

You could fork a lisp process for each of these tasks, suspend them by
sending them a STOP signal, and batch a kill -CONT $PID command to
resume them when the system load is low enough for the batch commands
to be run.

;; (pseudo-code, I don't have cmucl installed on this computer to check all of it).

(progn (setq *gc-verbose* nil)
      (unless (fboundp 'funct+shortdir+fn1-mayload)
        (load "/home/users/rem/LispWork/2007-2-mayload.lisp"))
      (loop
         :with pid = 0
         :for task :in (list
                        (lambda () (funct+shortdir+fn1-mayload 'filenamebase-std-roll-before :LISP "2007-2-roll"))
                        (lambda () (funct+shortdir+fn1-mayload 'load-file-by-method :LISP "2005-8-readers"))
                        (lambda () (funct+shortdir+fn1-mayload 'dirspec+filnam+method+globsym-may-load :LISP "2008-4-MayLoad"))
                        (lambda () (funct+shortdir+fn1-mayload 'make-empty-heap :LISP "2001-B-heap"))
                        (lambda () (funct+shortdir+fn1-mayload 'string-read-words-batch-sort :LISP "2008-3-WordHist"))
                        (lambda () (funct+shortdir+fn1-mayload 'phvec-normalize :LISP "2008-3-ProxHash"))
                        (lambda () (funct+shortdir+fn1-mayload 'device-to-ar+mr+mc :LISP "2008-3-TextGraphics"))
                        (lambda () (funct+shortdir+fn1-mayload 'trans-skills-lines :LISP "2008-3-TopPH"))
                        (lambda () (funct+shortdir+fn1-mayload 'trans-skills-3d-012-links :LISP "2008-5-TopPH")))
         :do (if (zerop (setf pid (unix:unix-fork)))
                 (progn ; child: stop self; the batch job resumes us later
                   (unix:unix-kill (unix:unix-getpid)
                                   (unix:unix-signal-number :sigstop))
                   (funcall task)
                   (unix:unix-exit 0))
                 (progn ; parent: queue "kill -CONT <child>" via batch(1)
                   (sleep 1)
                   (let ((proc (ext:run-program "batch" nil
                                                :input :stream :wait nil)))
                     (with-open-stream (cmd (ext:process-input proc))
                       (format cmd "kill -CONT ~A~%" pid)))))))


But you'll probably get better results with a nice process.  It doesn't
run slower; it merely has lower priority than the other users'
processes.  

In any case, for long running computing processes, you should probably
try to set things up to avoid any need for interactive use once the
computations are started.  Save the results (and intermediate results)
to files, to be able to restart from new processes, etc.  A good model
would be the ····@home approach.
Have a look at Boinc too: http://en.wikipedia.org/wiki/BOINCProject.

-- 
__Pascal Bourguignon__
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Nice processes on Unix (was: Parallel Common-Lisp with at least 64 processors?)
Date: 
Message-ID: <rem-2008jun05-007@yahoo.com>
> > % uptime
> >  5:01AM  up 3 days, 25 mins, 9 users, load averages: 5.11, 6.07, 6.49
> > Anyway, with load around 5 or 6 it looks too busy for me to run any
> > major CPU-speed benchmark.

> From: ····@informatimago.com (Pascal J. Bourguignon)
> Well,  5:00 AM, is often the time at which the last cron tasks of
> the night are just started.  Try it at another random time.

% uptime
12:48PM  up  9:06, 18 users, load averages: 2.64, 4.13, 5.14

If it's a quad core, then it looks like one core is sitting totally
idle at the moment, and the other three are not quite fully
occupied. But if it's single core, it's pretty busy now. Is that
correct, or do I misunderstand?

What Unix command can be used to find out how many cores this
computer system has?

> >> would give you the load average such as:
> >> 14:44:30 up  4:39,  2 users,  load average: 0.05, 0.04, 0.13
> >
> > I don't know what machine gave you *those* numbers, but that
> > machine is sorely underutilized?
> Not all the time :-)  Also it's a quad-core, so the average is about
> divided by four, compared to a normal processor.

Do you mean the number reported by uptime is only 1/4 of the actual
load? So that wouldn't help me if my own ISP were quad core, the
2.64 reported actually means 10.56 processes competing for those
four cores? I don't suppose you know of a Web-based tutorial that
explains these sorts of things so that I wouldn't need to ask you
so many questions?

> ... you can only increase the nicety level of your processes.
> $ clisp  -x '(loop :for i :from 0 :do (* i i))' >/dev/null 2>&1 &
(actually cmucl in my case; anyway, at this point I would do
 various interactive stuff to debug code in preparation for a major
 computation)
> [2] 13589
> $ renice +4 13589
> 13589: old priority 0, new priority 4
> $ renice +6 13589
> 13589: old priority 4, new priority 6
(at this point I would run a major computation;
 done with that, now I want to work interactively again, but I can't!!)
> $ renice +2 13589
> renice: 13589: setpriority: Permission denied
> $ renice +5 13589
> renice: 13589: setpriority: Permission denied
(so basically I can set up a major computation to run nice *only*
 if it's the very last thing I want to do during my Lisp session)

> So you could start with a normal nicety, do your loading and
> compilation full speed, and once you launch your computation, you
> can detach your process and renice it.

I don't know how to detach a process, unless that process is 'screen',
whereby C-A d detaches it.
The 'detach' command doesn't detach an existing process, instead it
starts a brand new process in detached mode, which would provide me
no way to take the Lisp environment I already have built up and
then simply call one function that invokes a major compute task.

> > If I have a lot of little tasks to execute, I put them into a PROGN
> > or LIST, and if I'm going to do the same sequence of tasks many
> > times over different days then I might put them into a DEFUN. ...

> You could fork a lisp process for each of these tasks, suspend them by
> sending them a STOP signal, and batch a kill -CONT $PID command to
> resume them when the system load is low enough for the batch commands
> to be run.

That's not ANSI-CL.
I'd need to find out how to do all that in CMU Common Lisp 18b.
But for lots of little Lisp tasks that load files or otherwise
build up a single Lisp environment, such as the example I posted,
it'd be worthless to me.

> In any case, for long running computing processes, you should
> probably try to set up things to avoid a need for interactive use
> once the computations are started.

Actually whenever I get anything debugged and canned to that
degree, with all the set-up from a barebones CMUCL included in the
canned task, I usually interface it to CGI. That automatically runs
it in a separate process, and it's only a click away from a Web
page so I don't have to remember where I put text for the toplevel
function call and copy&paste from there to the REP, and I can have
all such canned tasks in a nice hierarchical menu
 (single nested UL/LI, or multiple Web pages in a tree, however I
  want the tasks organized).

> A good model would be the ····@home

Setting up a system for automatically downloading tasks to millions
of personal computers around the world seems like overkill.

> Have a look at Boinc too: http://en.wikipedia.org/wiki/BOINCProject.
   Wikipedia does not have an article with this exact name. Please search
   for BOINCProject in Wikipedia to check for alternative titles or
   spellings.
->
Search results
   You searched for BOINCProject [Index]
No article title matches

     * Search for "BOINCProject" in existing articles.
->
No page text matches
From: Pascal J. Bourguignon
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <7czlpz7zcg.fsf@pbourguignon.anevia.com>
··················@spamgourmet.com.remove (Robert Maas, http://tinyurl.com/uh3t) writes:

>> > % uptime
>> >  5:01AM  up 3 days, 25 mins, 9 users, load averages: 5.11, 6.07, 6.49
>> > Anyway, with load around 5 or 6 it looks too busy for me to run any
>> > major CPU-speed benchmark.
>
>> From: ····@informatimago.com (Pascal J. Bourguignon)
>> Well,  5:00 AM, is often the time at which the last cron tasks of
>> the night are just started.  Try it at another random time.
>
> % uptime
> 12:48PM  up  9:06, 18 users, load averages: 2.64, 4.13, 5.14
>
> If it's a quad core, then it looks like one core is sitting totally
> idle at the moment, and the other three are not quite fully
> occupied. But if it's single core, it's pretty busy now. Is that
> correct, or do I misunderstand?
>
> What Unix command can be used to find out how many cores this
> computer system has?

That would depend on the unix system you have.

On linux, you can do: cat /proc/cpuinfo
or use top, and type 1 to get per cpu stats.


On MacOSX, IIRC, host_info will do.


>> >> would give you the load average such as:
>> >> 14:44:30 up  4:39,  2 users,  load average: 0.05, 0.04, 0.13
>> >
>> > I don't know what machine gave you *those* numbers, but that
>> > machine is sorely underutilized?
>> Not all the time :-)  Also it's a quad-core, so the average is about
>> divided by four, compared to a normal processor.
>
> Do you mean the number reported by uptime is only 1/4 of the actual
> load? 

No, I was wrong, sorry.  Those numbers are not divided by the number
of processors (at least on linux).  (See my answer to Madhu).


> So that wouldn't help me if my own ISP were quad core, the
> 2.64 reported actually means 10.56 processes competing for those
> four cores? 

If your system is a quad-core, and you have a lav of 2.64, it means
that 4-2.64=1.36 cores are left unused.

> I don't suppose you know of a Web-based tutorial that
> explains these sorts of things so that I wouldn't need to ask you
> so many questions?

man uptime
man loadavg
and some experiments (easier done when you 'own' the system).
reading the linux or freebsd sources.

But I guess google could find some web page explaining them too.


>> So you could start with a normal nicety, do your loading and
>> compilation full speed, and once you launch your computation, you
>> can detach your process and renice it.
>
> I don't know how to detach a process, unless that process is 'screen',
> whereby C-A d detaches it.

Yes, I would advise screen anyways.  An alternative for server (lisp)
processes would be detachtty, which like screen allows for reattaching
them.

Otherwise, in general, in bash, you can detach any job with the disown
built-in command.  

in bash:

    lisp -x '(load "batch-run.lisp")' & disown

or if you start interactively, suspend the process (usually C-z), and then:

    bg     # to let it run in background and
    disown # to detach it

however, interactive processes usually try to read from the
tty, so detaching them might be more difficult (ideally, they should
close stdin/stdout/stderr).


> The 'detach' command doesn't detach an existing process, instead it
> starts a brand new process in detached mode, which would provide me
> no way to take the Lisp environment I already have built up and
> then simply call one function that invokes a major compute task.

Perhaps: 

   detach lisp -x  '(load "batch-run.lisp")'


>> You could fork a lisp process for each of these tasks, suspend them by
>> sending them a STOP signal, and batch a kill -CONT $PID command to
>> resume them when the system load is low enough for the batch commands
>> to be run.
>
> That's not ANSI-CL.

Indeed not.  But that's still in your lisp image. Use APROPOS,
DESCRIBE, DOCUMENTATION, and read the user manual of your
implementation.
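
For instance (the symbols shown are CMUCL-specific, the same UNIX
package used in the pseudo-code above; treat this as a starting
point, not a recipe):

   (apropos "FORK" :unix)       ; should list e.g. UNIX:UNIX-FORK
   (describe 'unix:unix-fork)   ; implementation documentation for it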


>> Have a look at Boinc too: http://en.wikipedia.org/wiki/BOINCProject.
>    Wikipedia does not have an article with this exact name. Please search
>    for BOINCProject in Wikipedia to check for alternative titles or
>    spellings.
> ->
> Search results
>    You searched for BOINCProject [Index]
> No article title matches
>
>      * Search for "BOINCProject" in existing articles.
> ->
> No page text matches

Indeed. I don't know why this 'Project' suffix was added here.
Try rather:  http://en.wikipedia.org/wiki/BOINC

You could have guessed it yourself: if a word made of two words glued
together doesn't give any hit, try the two words separated by
spaces, then try each word alone in turn.

I thought you where smart and a programmer.  If you don't have these
rules already in your brain, you can add (program) them yourself! ;-)

-- 
__Pascal Bourguignon__
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <rem-2008jun07-001@yahoo.com>
> >> Have a look at Boinc too: http://en.wikipedia.org/wiki/BOINCProject.
> >    Wikipedia does not have an article with this exact name. Please search
> >    for BOINCProject in Wikipedia to check for alternative titles or
> >    spellings.
> > ->
> > Search results
> >    You searched for BOINCProject [Index]
> > No article title matches
> >      * Search for "BOINCProject" in existing articles.
> > ->
> > No page text matches
> From: ····@informatimago.com (Pascal J. Bourguignon)
> Indeed. I don't know why this 'Project' stem was added here.
> Try rather:  http://en.wikipedia.org/wiki/BOINC
> You could have guessed it yourself, if a word made of two words
> glued together doesn't give any hit, try the two words separated
> with spaces, then try each word in turn alone.

Unfortunately "boink" by itself is a code word in singles
organizations having to do with social gatherings, sexual intercourse,
and penguins, including fist-punching toy penguins in lieu of
punching real people as the Three Stooges often did. I anticipated
that the variant spelling would turn up similar matches on Google
so the search for that word by itself wasn't worth doing.

> I thought you where smart and a programmer.  If you don't have
> these rules already in your brain, you can add (program) them
> yourself! ;-)

I have a rule in my brain that if a particular word has an obscene
meaning, then using that word as a sole search term to find
something totally unrelated is a waste of time and a likely harm to
my mental state.
From: Pascal J. Bourguignon
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <878wxgdrhc.fsf@hubble.informatimago.com>
··················@spamgourmet.com.remove (Robert Maas, http://tinyurl.com/uh3t) writes:

>> >> Have a look at Boinc too: http://en.wikipedia.org/wiki/BOINCProject.
>> >    Wikipedia does not have an article with this exact name. Please search
>> >    for BOINCProject in Wikipedia to check for alternative titles or
>> >    spellings.
>> > ->
>> > Search results
>> >    You searched for BOINCProject [Index]
>> > No article title matches
>> >      * Search for "BOINCProject" in existing articles.
>> > ->
>> > No page text matches
>> From: ····@informatimago.com (Pascal J. Bourguignon)
>> Indeed. I don't know why this 'Project' stem was added here.
>> Try rather:  http://en.wikipedia.org/wiki/BOINC
>> You could have guessed it yourself, if a word made of two words
>> glued together doesn't give any hit, try the two words separated
>> with spaces, then try each word in turn alone.
>
> Unfortunately "boink" by itself is a code word in singles
> organizations having to do with social gatherings, sexual intercourse,
> and penguins, including fist-punching toy penguins in lieu of
> punching real people as the Three Stooges often did. I anticipated
> that the variant spelling would turn up similar matches on Google
> so the search for that word by itself wasn't worth doing.
>
>> I thought you where smart and a programmer.  If you don't have
>> these rules already in your brain, you can add (program) them
>> yourself! ;-)
>
> I have a rule in my brain that if a particular word has an obscene
> meaning, then using that word as a sole search term to find
> something totally unrelated is a waste of time and a likely harm to
> my mental state.

I don't know what  you're talking about.


Results 1 - 100 of about 14,300,000 for boinc. (0.33 seconds)

BOINC
BOINC is an open-source software platform for computing using volunteered resources.
boinc.berkeley.edu/

BOINC: compute for science
BOINC is a program that lets you donate your idle computer time to science projects like ····@home, Climateprediction.net, ·······@home, World Community ...
boinc.berkeley.edu/download.php

Berkeley Open Infrastructure for Network Computing - Wikipedia ...
The Berkeley Open Infrastructure for Network Computing (BOINC) is a non-commercial middleware system for volunteer and grid computing. ...
en.wikipedia.org/wiki/BOINC

Berkeley Open Infrastructure for Network Computing - Wikipedia ...
The Berkeley Open Infrastructure for Network Computing (BOINC) is a non-commercial middleware system for volunteer computing, originally developed to ...
en.wikipedia.org/wiki/Berkeley_Open_Infrastructure_for_Network_Computing



That said, I've got the impression that google filters or orders
results depending on each user's history. If you've been clicking a lot
on CS links, you should get them on top of your next results.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

HEALTH WARNING: Care should be taken when lifting this product,
since its mass, and thus its weight, is dependent on its velocity
relative to the user.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: BOINC (was: Nice processes on Unix)
Date: 
Message-ID: <rem-2008jun09-004@yahoo.com>
> > Unfortunately "boink" by itself is a code word in singles
> > organizations having to do with social gatherings, sexual intercourse,
> > and penguins, including fist-punching toy penguins in lieu of
> > punching real people as the Three Stooges often did. I anticipated
> > that the variant spelling would turn up similar matches on Google
> > so the search for that word by itself wasn't worth doing.
> >> I thought you where [sic] smart and a programmer.  If you don't have
> >> these rules already in your brain, you can add (program) them
> >> yourself! ;-)
> > I have a rule in my brain that if a particular word has an obscene
> > meaning, then using that word as a sole search term to find
> > something totally unrelated is a waste of time and a likely harm to
> > my mental state.
> From: ····@informatimago.com (Pascal J. Bourguignon)
> I don't know what you're talking about.

 <http://groups.google.com/group/ba.singles/msg/a8869c3343ec6383?hl=en&dmode=source>
Search for words "penguin" and "boink".

Also take a glance at:
 <http://boinkmagazine.com/>
 <http://www.urbandictionary.com/define.php?term=boink>
 <http://www.starma.com/penis/richardkitty/richardkitty.html>
 <http://www.randyrants.com/2002/08/boink_boinkity.html>

> BOINC is an open-source software platform for computing using
> volunteered resources.
..
> BOINC is a program that lets you donate your idle computer time
> to science projects like ····@home, Climateprediction.net,
> ·······@home, World Community ...

If and when I have some software that I don't mind making public, for
which I need massive amounts of parallel processing, that might be
an option. But my original asking if somebody had access to a
massive parallel computer was more for running a test of
proprietary software that I wouldn't want released to public access
on millions of computers around the world. There'd be an agreement
that I could run my software on such a machine and that the person
hosting it wouldn't steal my algorithms.

> That said, I've got the impression that google filters or orders
> results depending on each user history. If you've been clicking a
> lot on CS links, you should get them on top of your next results.

How would Google have the slightest idea who I am when I start up
lynx from my shell account on a commercial ISP and type google.com
and submit a search?? Does the ISP's sysadmin covertly reveal to
Google which account is making each HTTP/TCP/IP connection to
google?? I don't believe it's happening. Do you have evidence to
support your (IMO) ridiculous claim?
From: Pascal J. Bourguignon
Subject: Re: BOINC
Date: 
Message-ID: <87prqq9tq9.fsf@hubble.informatimago.com>
··················@spamgourmet.com.remove (Robert Maas, http://tinyurl.com/uh3t) writes:
>  <http://groups.google.com/group/ba.singles/msg/a8869c3343ec6383?hl=en&dmode=source>
> Search for words "penguin" and "boink".
>
> Also take a glance at:
>  <http://boinkmagazine.com/>
>  <http://www.urbandictionary.com/define.php?term=boink>
>  <http://www.starma.com/penis/richardkitty/richardkitty.html>
>  <http://www.randyrants.com/2002/08/boink_boinkity.html>

Why do you keep orienting the discussion in this direction?

>> BOINC is an open-source software platform for computing using
>> volunteered resources.

> How would Google have the slightest idea who I am when I start up
> lynx from my shell account on a commercial ISP and type google.com
> and submit a search?? Does the ISP's sysadmin covertly reveal to
> Google which account is making each HTTP/TCP/IP connection to
> google?? I don't believe it's happening. Do you have evidence to
> support your (IMO) ridiculous claim?

1- Cookies.
2- The Identification protocol, RFC-1413.  Are you sure your ISP disabled it?



-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
Wanna go outside.
Oh, no! Help! I got outside!
Let me back inside!
From: Madhu
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <m3d4mvjch4.fsf@meer.net>
* (Robert Maas, http://tinyurl.com/uh3t) <·················@yahoo.com> :
Wrote on Thu, 05 Jun 2008 14:36:42 -0700:

| % uptime
| 12:48PM  up  9:06, 18 users, load averages: 2.64, 4.13, 5.14
|
| If it's a quad core, then it looks like one core is sitting totally
| idle at the moment, and the other three are not quite fully
| occupied. But if it's single core, it's pretty busy now. Is that
| correct, or do I misunderstand?

`loadavg' is a measure of contention -- [see manpage definition below]
loosely it is the number of processes trying to run simultaneously
(contending for CPU) or waiting for IO (contending for Disk).  Someone
from google once told me that if this number is greater than 1 "your
system is in trouble", as the system is overloaded.

| What Unix command can be used to find out how many cores this
| computer system has?

See Rob Warnock's reply to you in this thread:
<····································@speakeasy.net>
<········································································@speakeasy.net>

Linux: cat /proc/cpuinfo should tell you.
Freebsd:  cat /usr/compat/linux/proc/cpuinfo
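
From Lisp, a minimal sketch of the same check (plain ANSI CL; assumes a
Linux-style cpuinfo file, so on FreeBSD point PATH at the compat file
above):

(defun count-cpus (&optional (path "/proc/cpuinfo"))
  "Count the `processor' entries in a Linux-style cpuinfo file."
  (with-open-file (stream path)
    (loop for line = (read-line stream nil)
          while line
          count (and (>= (length line) 9)
                     (string= "processor" line :end2 9)))))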

|> >> would give you the load average such as:
|> >> 14:44:30 up  4:39,  2 users,  load average: 0.05, 0.04, 0.13
|> >
|> > I don't know what machine gave you *those* numbers, but that
|> > machine is sorely underutilized?
|> Not all the time :-)  Also it's a quad-core, so the average is about
|> divided by four, compared to a normal processor.
|
| Do you mean the number reported by uptime is only 1/4 of the actual
| load? So that wouldn't help me if my own ISP were quad core, the
| 2.64 reported actually means 10.56 processes competing for those
| four cores? I don't suppose you know of a Web-based tutorial that
| explains these sorts of things so that I wouldn't need to ask you
| so many questions?

The number should be CPU independent.
I'm on Linux.  Here you get the loadavg by doing `cat /proc/loadavg'
	=> 0.15 0.13 0.09 3/157 14446
The linux manpage explains it this way:

,---- man (5) proc
|   /proc/loadavg
|           The  first  three fields in this file are load average figures giving the
|           number of jobs in the run queue (state R) or waiting for disk I/O  (state
|           D)  averaged  over  1,  5, and 15 minutes.  They are the same as the load
|           average numbers given by uptime(1) and other programs.  The fourth  field
|           consists  of two numbers separated by a slash (/).  The first of these is
|           the number of currently executing kernel scheduling entities  (processes,
|           threads);  this  will  be  less than or equal to the number of CPUs.  The
|           value after the slash is the number of kernel  scheduling  entities  that
|           currently exist on the system.  The fifth field is the PID of the process
|           that was most recently created on the system.
 ---
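
On Linux the three averages can thus also be read straight from that
file; a minimal sketch (plain ANSI CL, Linux only):

(defun read-loadavg ()
  "Return the 1-, 5- and 15-minute load averages from /proc/loadavg."
  (with-open-file (stream "/proc/loadavg")
    (let (*read-eval*)                  ; READ safely, no #. evaluation
      (loop repeat 3 collect (read stream)))))

;; (read-loadavg) => (0.15 0.13 0.09)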

--
Madhu
From: Tim Bradshaw
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <4915c2f3-d92f-44ec-9041-2edaef720d92@m45g2000hsb.googlegroups.com>
On Jun 6, 7:37 am, Madhu <·······@meer.net> wrote:

> `loadavg' is a measure of contention -- [see manpage definition below]
> loosely it is the number of processes trying to run simultaneously
> (contending for CPU) or waiting for IO (contending for Disk).  Someone
> from google once told me that if this number is greater than 1 "your
> system is in trouble", as the system is overloaded.

I'm not sure how much it varies between implementations, but
traditionally, and simplifying somewhat, the load average is the
number of processes which are not waiting for anything other than I/O
in order to run (ie this does not count processes which are waiting
for you to type something, say).

Thus a load average less than or equal to the number of cores in the
system means that everything is getting CPU time that wants it
(processes may still be starving for I/O of course).

In particular, what counts as a "bad" load average scales with the
number of cores: for many years Unix systems all had a single core, but
nowadays very many systems have more than one.  This
took me a while to get used to, but now I always check the number of
cores before panicking at a load of 25 or something...

(And "cores" is not correct: on systems such as Sun's CMT machines the
number of interest is the number of virtual processors, which is some
multiple of the number of cores (4 or 8, depending on generation I
think).)

--tim
From: Rob Warnock
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <3I6dnQr2w_f2itTVnZ2dnUVZ_tLinZ2d@speakeasy.net>
Tim Bradshaw  <··········@tfeb.org> wrote:
+---------------
| On Jun 6, 7:37 am, Madhu <·······@meer.net> wrote:
| > `loadavg' is a measure of contention -- [see manpage definition below]
| > loosely it is the number of processes trying to run simultaneously
| > (contending for CPU) or waiting for IO (contending for Disk).  Someone
| > from google once told me that if this number is greater than 1 "your
| > system is in trouble", as the system is overloaded.
| 
| I'm not sure how much it varies between implementations, but
| traditionally, and simplifying somewhat, the load average is the
| number of processes which are not waiting for anything other than I/O
| in order to run (ie this does not count processes which are waiting
| for you to type something, say).
+---------------

<FLAME value="on">
Unfortunately for computer science, historical practice, sanity,
and just plain common sense, Linux has decided to include in the
"load average" *all* processes that are *waiting* for *any* I/O
completions, whether from swapping disks or slow file disks or
networks or slow serial lines or paper tape readers! This results
in such nonsense, for example, as often seeing a "load average"
of 150+ on a *TOTALLY IDLE* NFS server that just happens to have
a large number of mounts on it. (*sigh*)

Whereas sane operating systems such as TOPS-10, TOPS-20, Irix, Solaris,
{Free,Net,Open}BSD, and many others include in the load average only
processes that are waiting to get a *CPU* on which to run.
</FLAME>


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Robert Uhl
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <m38wx92wfg.fsf@latakia.octopodial-chrome.com>
Tim Bradshaw <··········@tfeb.org> writes:
>
> Thus a load average less than or equal to the number of cores in the
> system means that everything is getting CPU time that wants it
> (processes may still be starving for I/O of course).
>
> In particular, what is a "bad" load average scales as the number of
> cores: for many years of course Unix systems all had a single core,
> but nowadays very many systems have more than one of course.  This
> took me a while to get used to, but now I always check the number of
> cores before panicking at a load of 25 or something...

Wouldn't number of cores be irrelevant to the badness of the number of
_waiting_ (as opposed to _running_) processes?  I mean, if I have 14
processes waiting to run, it doesn't really matter if I have 16 cores or
1--I still have 14 processes which are not getting any work done.

Granted, with multiple cores those 14 processes will be gotten to more
quickly.

-- 
Robert Uhl <http://public.xdi.org/=ruhl>
Most people aren't thought about after they're gone.  `I wonder where
Bob got the plutonium' is better than most get.
From: Pascal J. Bourguignon
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <7c7id39en6.fsf@pbourguignon.anevia.com>
Madhu <·······@meer.net> writes:

> * (Robert Maas, http://tinyurl.com/uh3t) <·················@yahoo.com> :
> Wrote on Thu, 05 Jun 2008 14:36:42 -0700:
>
> | % uptime
> | 12:48PM  up  9:06, 18 users, load averages: 2.64, 4.13, 5.14
> |
> | If it's a quad core, then it looks like one core is sitting totally
> | idle at the moment, and the other three are not quite fully
> | occupied. But if it's single core, it's pretty busy now. Is that
> | correct, or do I misunderstand?
>
> `loadavg' is a measure of contention -- [see manpage definition below]
> loosely it is the number of processes trying to run simultaneously
> (contending for CPU) or waiting for IO (contending for Disk).  Someone
> from google once told me that if this number is greater than 1 "your
> system is in trouble", as the system is overloaded.

That's not exactly what I observe on my linux system.  Here, with 4
running processes, I still have a lav of 3.47: 

top - 09:52:13 up 8 days, 18:49,  6 users,  load average: 3.47, 2.03, 0.93
Tasks: 118 total,   5 running, 112 sleeping,   1 stopped,   0 zombie
Cpu0  : 93.4%us,  6.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.3%hi,  0.3%si,  0.0%st
Cpu1  : 96.0%us,  4.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 94.0%us,  6.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  : 94.0%us,  6.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   3921872k total,  2952724k used,   969148k free,   304024k buffers
Swap:  2000084k total,      160k used,  1999924k free,   942100k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
30593 pjb       20   0 20624  952  368 R  100  0.0   1:45.94 bash
30590 pjb       20   0 20624  952  368 R   99  0.0   1:45.28 bash
30592 pjb       20   0 20624  952  368 R   99  0.0   1:46.29 bash
30591 pjb       20   0 20624  952  368 R   98  0.0   1:45.60 bash
27939 pjb       20   0  319m 207m  22m S    3  5.4 150:00.75 firefox-bin
 4786 root      20   0  579m 273m 5876 S    2  7.1 260:50.94 X
    1 root      20   0  3696  580  488 S    0  0.0   0:06.02 init
    2 root      15  -5     0    0    0 S    0  0.0   0:00.00 kthreadd
    3 root      RT  -5     0    0    0 S    0  0.0   0:00.17 migration/0
    4 root      15  -5     0    0    0 S    0  0.0   0:02.03 ksoftirqd/0
    5 root      RT  -5     0    0    0 S    0  0.0   0:00.32 watchdog/0
    6 root      RT  -5     0    0    0 S    0  0.0   0:00.12 migration/1
    7 root      15  -5     0    0    0 S    0  0.0   0:01.87 ksoftirqd/1


Moreover, on mono-processor Linux systems the trouble only really
starts with lavs above 7.  Note that you can tune Linux kernel
scheduling and responsiveness (the kernel preemption parameter).


> The number should be CPU independent. 

No.  Read it again:

> I'm on linux Here you get the loadvg by doing `cat /proc/loadavg'
> 	=> 0.15 0.13 0.09 3/157 14446
> The linux manpage explains it this way:
>
> ,---- man (5) proc
> |   /proc/loadavg
> |           The  first  three fields in this file are load average figures giving the
> |           number of jobs in the run queue (state R) or waiting for disk I/O  (state
> |           D)  averaged  over  1,  5, and 15 minutes.  They are the same as the load
> |           average numbers given by uptime(1) and other programs.  The fourth  field
> |           consists  of two numbers separated by a slash (/).  The first of these is
> |           the number of currently executing kernel scheduling entities  (processes,
> |           threads);  this  will  be  less than or equal to the number of CPUs.  The
> |           value after the slash is the number of kernel  scheduling  entities  that
> |           currently exist on the system.  The fifth field is the PID of the process
> |           that was most recently created on the system.
>  ---

The average is done over time, not over the number of CPUs.  If you have
4 R (runnable) processes and 4 processors, the situation is 4 times
less critical than with 4 R processes and only 1 processor.


Another way to realize you're on a multi-processor system is time:

real    0m4.129s
user    0m10.136s
sys     0m0.890s

Mind you, the real time is less than the user+sys CPU times!  In this
example, it means that the program (several processes) ran on average
on  (/ (+ 10.136 0.890) 4.129) --> 2.67 processors.
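
The same computation as a function, a trivial sketch:

(defun effective-cpus (real user sys)
  "Average number of processors used: total CPU time over wall-clock time."
  (/ (+ user sys) real))

;; (effective-cpus 4.129 10.136 0.890) => 2.67 (approx.)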


-- 
__Pascal Bourguignon__
From: Espen Vestre
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <m1od6eizoa.fsf@gazonk.netfonds.no>
···@informatimago.com (Pascal J. Bourguignon) writes:

> That's not exactly what I observe on my linux system.  Here, with 4
> running processes, I still have a lav of 3.47: 

Our LispWorks-delivered applications make the use of load numbers on
our servers almost useless, since (presumably) hundreds of threads
executing mp:process-wait-with-timeout create high load averages
while the cpus are really mostly idle.

The servers typically report a load average of 25-50 while the cpu is
95% idle.
-- 
  (espen)
From: Madhu
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <m31w3bjb5f.fsf@meer.net>
* Madhu <··············@meer.net>  Wrote on Fri, 06 Jun 2008 12:07:11 +0530:

| I'm on linux Here you get the loadvg by doing `cat /proc/loadavg'
| 	=> 0.15 0.13 0.09 3/157 14446
| ,---- man (5) proc
| |   /proc/loadavg
[...]
| |       average numbers given by uptime(1) and other programs.  The fourth  field
| |       consists  of two numbers separated by a slash (/).  The first of these is
| |       the number of currently executing kernel scheduling entities  (processes,
| |       threads);  this  will  be  less than or equal to the number of CPUs.  The
| |       value after the slash is the number of kernel  scheduling  entities  that
| |       currently exist on the system.  The fifth field is the PID of the process
| |       that was most recently created on the system.
|  ---

Re. the fourth field reported on my single CPU non SMP system, `3/157',
that documentation is obviously incorrect: The number of currently
executing kernel scheduling entities (3) is NOT less than or equal to
the number of CPUs on my system (1).
--
Madhu
From: Tim X
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <8763sm3vtf.fsf@lion.rapttech.com.au>
Madhu <·······@meer.net> writes:

> * (Robert Maas, http://tinyurl.com/uh3t) <·················@yahoo.com> :
> Wrote on Thu, 05 Jun 2008 14:36:42 -0700:
>
> | % uptime
> | 12:48PM  up  9:06, 18 users, load averages: 2.64, 4.13, 5.14
> |
> | If it's a quad core, then it looks like one core is sitting totally
> | idle at the moment, and the other three are not quite fully
> | occupied. But if it's single core, it's pretty busy now. Is that
> | correct, or do I misunderstand?
>
> `loadavg' is a measure of contention -- [see manpage definition below]
> loosely it is the number of processes trying to run simultaneously
> (contending for CPU) or waiting for IO (contending for Disk).  Someone
> from google once told me that if this number is greater than 1 "your
> system is in trouble", as the system is overloaded.

I've heard this before and I think it's misleading. The way it was
explained to me is that a value less than 1 means that there were
spare/wasted CPU cycles, i.e. cycles where nothing was waiting to run on
the cpu. A value of 1 means that every cpu cycle was being used. A value
above 1 indicates that there was some contention for cpu cycles. A value
of 2 would indicate that to have no contention/delay, you would need a
cpu with twice the capacity (i.e. double the cpus or possibly the same
number of cpus, but operating at twice the speed).

However, all this is really just useful as a rough indicator. It is
ridiculous to state that a system is in trouble once it gets a load
average of over 1. Other issues need to be considered, such as the
amount of memory available, the number of processes waiting in the queue
and, above all, user expectations. I regularly see servers with averages
over 5, but it's not an issue because user expectations regarding
performance are being met.

From stuff I've read, I think the load average values you see have
become even less informative with the advent of multi-core systems. I do
remember seeing some debate on various lists on how to 'fix' things so
that multi-core systems were giving accurate indicators. I've not
monitored the debate for some time and I'm not sure what current
thinking is, but at the time, there was considerable debate regarding
the best way to give reliable and meaningful indicators of a system's
load. In the end, the best measure appears to be user expectation - if
response times are adequate, then they are adequate. If they are not,
then information such as load average can be useful, but should not be
taken in isolation. 

Some systems do provide more information with tools such as 'top' and
I'd recommend using that rather than just the values of load
average. Some versions of top and some operating systems will even show
individual CPU values etc.

Tim

-- 
tcross (at) rapttech dot com dot au
From: Tim Bradshaw
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <d8411603-22f0-4f05-9e0e-55e41d69013f@k30g2000hse.googlegroups.com>
On Jun 7, 1:58 am, Tim X <····@nospam.dev.null> wrote:

>
> I've heard this before and I think it's misleading. The way it was
> explained to me is that a value less than 1 means that there were
> spare/wasted CPU cycles, i.e. cycles where nothing was waiting to run on
> the cpu. A value of 1 means that every cpu cycle was being used. A value
> above 1 indicates that there was some contention for cpu cycles.

As others have said, this really isn't right.  Firstly, the average is
computed over the number of cores (or virtual CPUs in some sense),
and secondly the load is not just dependent on CPU utilisation but
also on I/O.  It is quite common to see machines with very high load
with almost no CPU utilisation, as they are starving for I/O.

> I do
> remember seeing some debate on various lists on how to 'fix' things so
> that multi-core systems were giving accurate indicators.

This is a non-problem, since the denominator of load is the number of
cores/virtual CPUs.
From: Raymond Wiker
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <m2prqr203x.fsf@RawMBP.local>
Tim Bradshaw <··········@tfeb.org> writes:

> On Jun 7, 1:58 am, Tim X <····@nospam.dev.null> wrote:
>
>>
>> I've heard this before and I think it's misleading. The way it was
>> explained to me is that a value less than 1 means that there were
>> spare/wasted CPU cycles, i.e. cycles where nothing was waiting to run on
>> the cpu. A value of 1 means that every cpu cycle was being used. A value
>> above 1 indicates that there was some contention for cpu cycles.
>
> As others have said, this really isn't right.  Firstly average is
> computed over the number of cores (or virtual CPUs in some sense),
> and secondly the load is not just dependent on CPU utilisation but
> also on I/O.  It is quite common to see machines with very high load
> with almost no CPU utilisation, as they are starving for I/O.

	I think the definition of the load average is the number of
runnable processes in the run queue - i.e., processes that are not
currently waiting for I/O or sleeping. Thus, I/O bound processes
should not count, but I could be wrong about this.
From: Rob Warnock
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <rZednX9ftK_yCNHVnZ2dnUVZ_gKdnZ2d@speakeasy.net>
Raymond Wiker  <···@RawMBP.local> wrote:
+---------------
| Tim Bradshaw <··········@tfeb.org> writes:
| > As others have said, this really isn't right.  Firstly average is
| > computed over the number of cores (or virtual CPUs in some sense),
| > and secondly the load is not just dependent on CPU utilisation but
| > also on I/O.  It is quite common to see machines with very high load
| > with almost no CPU utilisation, as they are starving for I/O.
| 
| 	I think the definition of the load average is the number of
| runnable processes in the run queue - i.e., processes that are not
| currently waiting for I/O or sleeping. Thus, I/O bound processes
| should not count, but I could be wrong about this.
+---------------

Sorry, you *are* wrong about this... for Linux.
(But only Linux. Most other O/Ss get it right.)


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Raymond Wiker
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <m2lk1e1pie.fsf@RawMBP.local>
····@rpw3.org (Rob Warnock) writes:

> Raymond Wiker  <···@RawMBP.local> wrote:
> +---------------
> | Tim Bradshaw <··········@tfeb.org> writes:
> | > As others have said, this really isn't right.  Firstly average is
> | > computed over the number of cores (or virtual CPUs in some sense),
> | > and secondly the load is not just dependent on CPU utilisation but
> | > also on I/O.  It is quite common to see machines with very high load
> | > with almost no CPU utilisation, as they are starving for I/O.
> | 
> | 	I think the definition of the load average is the number of
> | runnable processes in the run queue - i.e., processes that are not
> | currently waiting for I/O or sleeping. Thus, I/O bound processes
> | should not count, but I could be wrong about this.
> +---------------
>
> Sorry, you *are* wrong about this... for Linux.
> (But only Linux. Most other O/Ss get it right.)

	If the other OS's get it right, and differ from Linux, and my
description differs from what Linux does, then I'm right, right? Right
:-)
From: Tim Bradshaw
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <32a03dd8-efe2-412c-9571-a0d39fdfe457@p25g2000hsf.googlegroups.com>
On Jun 9, 3:29 am, ····@rpw3.org (Rob Warnock) wrote:

> Sorry, you *are* wrong about this... for Linux.
> (But only Linux. Most other O/Ss get it right.)
>

Me or him?  I've never found cases where I/O bound processes don't
count towards load (but I don't manage Linux machines much nowadays).
Certainly it did on BSD and SunOS, and does on Solaris.  There are other
tools (on Solaris anyway) to look at CPU usage or I/O usage in
isolation.
From: Rob Warnock
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <keKdnWzXs6o-H9LVnZ2dnUVZ_vednZ2d@speakeasy.net>
Tim Bradshaw  <··········@tfeb.org> wrote:
+---------------
| ····@rpw3.org (Rob Warnock) wrote:
| > Sorry, you *are* wrong about this... for Linux.
| 
| Me or him?
+---------------

He, Raymond Wiker, was wrong about the definition for Linux (only),
and I was disagreeing with him about that O/S only. I was *agreeing*
with Raymond that the definition of load as "number of runnable processes
in the run queue - i.e., processes that are not currently waiting
for I/O or sleeping" [scaled by the number of available CPU cores,
of course] is the *correct* definition of "load". [Linux therefore
uses an *incorrect* definition of "load", IMNSHO.]

I also agree with Raymond that "I/O bound processes should not count",
except of course for whatever fraction of CPU time they *do* actually
consume when they're not waiting for I/O completion events. [Again,
Linux gets this wrong, resulting in totally idle servers being shown
with load averages >100 just because that many processes are waiting
for network traffic.]

+---------------
| I've never found cases where I/O bound processes don't count
| towards load (but I don't manage Linux machines much nowadays).
| Certainly it did on BSD, SunOS and does on Solaris.
+---------------

As far as I can tell from a quick look at the FreeBSD 6.2 scheduler code, 
only processes (well, threads) in CPU run queues are counted in "load".
[See the code in "/usr/src/sys/kern/sched_ule.c" & thereabouts, especially
the calls to "kseq_load_add()" & "kseq_load_rem()".] If a process/thread
is taken out of the CPU run queues, its contribution to "load" is immediately
removed. AFAIK this is *always* the way BSD has worked, all the way back
to 4.1a-BSD [the earliest version that I dug into the kernel]. And from
observation of the behavior of servers with lots of processes in I/O wait,
this was the case for Irix as well.

I know [again, from direct observation] that this is most definitely
*NOT* the case on Linux.

[I cannot speak to SunOS/Solaris, due to lack of hands-on experience.]

So, yes, I guess I'm disagreeing with you, Tim.


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Tim Bradshaw
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <1182cf0e-8ce1-498f-bca6-f851b93cb858@d45g2000hsc.googlegroups.com>
On Jun 11, 8:45 am, ····@rpw3.org (Rob Warnock) wrote:
>
> So, yes, I guess I'm disagreeing with you, Tim.

And that's because it turns out I was wrong, even for Solaris! Sorry!
Solaris computes the load average the way it has always been done (at
least back to BSD 4?), which is based on number of processes in the
run queue (so the way you think it should be done).  Linux (it seems)
uses some function of run queue & blocked for I/O queue.  So you're
right, and I was wrong.

What led me astray was that on a machine which is paging heavily,
processes are in the run queue when in fact they are waiting for
disk (because they're waiting for memory which has been paged out).
It's *this* case where you can get huge load averages but tiny CPU
utilization.  Obviously that doesn't happen as much as it did, but I
remember we used to see this a lot for Lisps in GC on the BSD machines
where I first saw Unix (30MB or so images on a machine with 2MB of
real memory, which did very well except during GC).

And now I think I also remember what led me even further astray. If a
process uses mmap to map a large file into its address space, and then
accesses the resulting array, what does that count as?  If you're
naive it looks like paging and therefore this process looks like it's
runnable.  But actually it is blocked for I/O: in particular it's not
starving for memory.  Except, maybe it is, if the file it is paging
from is its own executable's text segment.

(This next bit is based on memory and may be wrong in places.) Solaris
uses mapped files pervasively, to the extent that normal memory
allocation is treated as mapping a file, which file happens to sit in
a special filesystem which is backed by swap.  For a long time this
gave rise to all sorts of problems, one of which was that systems
which were actually I/O bound could show up as having high load
averages.  Another, worse, problem was that processes doing intensive
I/O (which is most of the important ones on your average big database
box) could cause the VM system to start paging stuff that mattered
out, such as executables.  This all got fixed a few years ago by
making the system more aware of whether memory was backed by normal
files, or by anonymous swap, and I think by further treating read-only
executable file mappings specially.

Anyway, I was wrong, but I know why now.

--tim
From: Rob Warnock
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <0MWdncsq6vAUCtLVnZ2dnUVZ_sTinZ2d@speakeasy.net>
Tim Bradshaw  <··········@tfeb.org> wrote:
+---------------
| ····@rpw3.org (Rob Warnock) wrote:
| > So, yes, I guess I'm disagreeing with you, Tim.
| 
| And that's because it turns out I was wrong, even for Solaris! Sorry!
| Solaris computes the load average the way it has always been done (at
| least back to BSD 4?), which is based on number of processes in the
| run queue (so the way you think it should be done).  Linux (it seems)
| uses some function of run queue & blocked for I/O queue.  So you're
| right, and I was wrong.
+---------------

Not a problem. The only reason I was being pretty stubborn about it
was that Linux is the *only* place I've ever seen these silly 100+
load averages on essentially idle systems!!

+---------------
| What led me astray was that on a machine which is paging heavily,
| processes are in the run queue when in fact they are waiting for
| disk (because they're waiting for memory which has been paged out).
| It's *this* case where you can get huge load averages but tiny CPU
| utilization.  Obviously that doesn't happen as much as it did, but I
| remember we used to see this a lot for Lisps in GC on the BSD machines
| where I first saw Unix (30MB or so images on a machine with 2MB of
| real memory, which did very well except during GC).
+---------------

Hmmm... O.k., I suppose counting processes in the "Page-In" wait queue
[as TOPS-10 called it] as part of "load" isn't all that unreasonable.
I'll have to go look at the current FreeBSD code again and see if they
still do that.

+---------------
| And now I think I also remember what led me even further astray. If a
| process uses mmap to map a large file into its address space, and then
| accesses the resulting array, what does that count as?  If you're
| naive it looks like paging and therefore this process looks like it's
| runnable.  But actually it is blocked for I/O: in particular it's not
| starving for memory.  Except, maybe it is, if the file it is paging
| from is its own executable's text segment.
+---------------

Yeah, that one's a bit borderline, particularly since with the
proliferation of software that's based on some language VM
[hint, hint] the definition of "executable" is a bit broad.

+---------------
| (This next bit is based on memory and may be wrong in places.)
| Solaris uses mapped files pervasively, to the extent that normal memory
| allocation is treated as mapping a file, which file happens to sit in
| a special filesystem which is backed by swap.  For a long time this
| gave rise to all sorts of problems, one of which was that systems
| which were actually I/O bound could show up as having high load averages.
+---------------

Ah, yezz! Now that you mention it, I do seem to recall something
about that with some version of Sun systems.

Anyway, whether processes waiting to be re-paged-in get counted in
"load" or not, I think we can all probably agree that current Linux
does it wrong when an idle system has a 100+ load!  ;-}


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Mark Wooding
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <slrng4vdo7.ihm.mdw@metalzone.distorted.org.uk>
Rob Warnock <····@rpw3.org> wrote:

> Not a problem. The only reason I was being pretty stubborn about it
> was that Linux is the *only* place I've ever seen these silly 100+
> load averages on essentially idle systems!!

I've seen it on Irix, too.

I feel a need to set the record straight for Linux, by the way.  The
load average counts

  * processes which are runnable, and
  * processes which are blocked /uninterruptibly/.

The number of places where a process can block uninterruptibly is
actually quite small.  Just waiting for a network socket or terminal
won't do it.  Waiting for a local block device usually will, whether it
be for filesystem access or swapping.

The only point where this gets really strange is NFS: if you mount the
fileserver `hard' then the kernel will block uninterruptibly for NFS
responses, and therefore processes waiting for NFS will appear to be
`loading' the system.  If you mount `soft' then this won't happen (and
you'll be able to kill processes stuck waiting for the NFS server to
resurrect itself) but you'll also run the risk of data loss on
read/write mounts.

Irix has the same definition of load, as far as I can tell: certainly I
remember seeing stupidly high load averages on perfectly responsive
systems.

Finally, if you don't like the Linux kernel's behaviour, kill the line

                uninterruptible += cpu_rq(i)->nr_uninterruptible;

in nr_active() (kernel/sched.c).

-- [mdw]
From: Rob Warnock
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <QeWdnf_Fmt3aH83VnZ2dnUVZ_jydnZ2d@speakeasy.net>
Mark Wooding  <···@distorted.org.uk> wrote:
+---------------
| Rob Warnock <····@rpw3.org> wrote:
| > Not a problem. The only reason I was being pretty stubborn about it
| > was that Linux is the *only* place I've ever seen these silly 100+
| > load averages on essentially idle systems!!
| 
| I've seen it on Irix, too.
+---------------

Really? I believe you; I've just never seen it myself
(and I worked at SGI for 13 years). Maybe I just never
ran the kind of thing that triggered it.

By the way, notice that I'm *not* talking about a system
that is busy doing a whole bunch of work with very little
consumption of CPU time, e.g., a big Lisp app that's
thrashing swap. Tim Bradshaw pointed out that "page-in wait"
is often counted as "runnable" (and thus appears in "load"),
and I can accept that as reasonable (for some values of
"reasonable"). I'm talking about systems that are *idle*,
running *nothing* but the usual assortment of daemons,
yet with 100+ "load averages".

+---------------
| I feel a need to set the record straight for Linux, by the way.
| The load average counts
|   * processes which are runnable, and
|   * processes which are blocked /uninterruptably/.
...
| The only point where this gets really strange is NFS: if you mount the
| fileserver `hard' then the kernel will block uninterruptably for NFS
| responses, and therefore processes waiting for NFS will appear to be
| `loading' the system.
+---------------

Hmmm... This may be the clue, or at least related. The Linux
systems that I was complaining about were all NFS *servers*
which implemented the NFS service (or at least portions of it)
in the kernel. So at boot time they'd start up a bunch of
"nfsd" processes which would dive into the kernel and wait
to handle requests. It was probably these processes which
were viewed as "busy" even when they were completely idle.

[And, no, it was not necessary for a client to mount them
for the load average to skyrocket.]


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Madhu
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <m3r6b3z3wv.fsf@meer.net>
* (Rob Warnock) <································@speakeasy.net> :
Wrote on Wed, 11 Jun 2008 04:14:49 -0500:

| Tim Bradshaw  <··········@tfeb.org> wrote:
| +---------------
| | ····@rpw3.org (Rob Warnock) wrote:
| | > So, yes, I guess I'm disagreeing with you, Tim.
| | 
| | And that's because it turns out I was wrong, even for Solaris! Sorry!
| | Solaris computes the load average the way it has always been done (at
| | least back to BSD 4?), which is based on number of processes in the
| | run queue (so the way you think it should be done).  Linux (it seems)
| | uses some function of run queue & blocked for I/O queue.  So you're
| | right, and I was wrong.
| +---------------
|
| Not a problem. The only reason I was being pretty stubborn about it
| was that Linux is the *only* place I've ever seen these silly 100+
| load averages on essentially idle systems!!

By essentially idle I assume you mean no process shows 100% CPU
utilization (using `top', say).

I have observed weird numbers across a number of Linux kernels across
varying implementations of the VM, some buggy.  AIUI, the weird numbers
were typically caused by I/O in the VM system which tfb refers to [in
the next quoted section below -- Linux uses this system heavily too].
In some cases badly behaving loadable modules [sound, fb] caused
threads which counted towards the high load average without leaving a
trace in `ps' or `top'.  These are still indicative of a malfunctioning
system, rather than a poor load average algorithm.

[BTW when I posted the bit upthread where I quoted someone from google,
 I did not mean to imply I believed what he said was correct, having
 observed otherwise --- the idea was to support the point that Tim X is
 making: that the original poster (Maas) could not use the load average
 numbers to estimate the stuff he was talking about]

Also, BTW, I noticed a Load_Average.pdf on the net, from p.62 of Issue
83, Oct 2007, www.linux-magazine.com: "Understanding load average and
stretch factors" by Neil Gunther, which talks about a `stretch factor'
measure but AFAICT also misses the VM I/O consideration.

| +---------------
| | What led me astray was that on a machine which is paging heavily,
| | processes are in the run queue when in fact they are waiting for
| | disk (because they're waiting for memory which has been paged out).
| | It's *this* case where you can get huge load averages but tiny CPU
| | utilization.  Obviously that doesn't happen as much as it did, but I
| | remember we used to see this a lot for Lisps in GC on the BSD machines
| | where I first saw Unix (30MB or so images on a machine with 2MB of
| | real memory, which did very well except during GC).
| |
| | And now I think I also remember what led me even further astray. If a
| | process uses mmap to map a large file into its address space, and then
| | accesses the resulting array, what does that count as?  If you're
| | naive it looks like paging and therefore this process looks like it's
| | runnable.  But actually it is blocked for I/O: in particular it's not
| | starving for memory.  Except, maybe it is, if the file it is paging
| | from is its own executable's text segment.
| +---------------
From: Rob Warnock
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <8oKdnR3eVMZdVc3VnZ2dnUVZ_tTinZ2d@speakeasy.net>
Madhu  <·······@meer.net> wrote:
+---------------
| Rob Warnock <····@rpw3.org> wrote:
| | Not a problem. The only reason I was being pretty stubborn about it
| | was that Linux is the *only* place I've ever seen these silly 100+
| | load averages on essentially idle systems!!
| 
| By essentially idle I assume you mean no process shows 100% CPU
| utilization (using `top', say)
+---------------

No, I mean that: (1) no process was showing *any* CPU usage
[except a fraction of a percent for "top"]; (2) there was
*no* network traffic [checked with "tcpdump"]; and (3) since
the box was a NAS filer, network traffic is the *only* potential
workload there is [no "user" apps at all, except my shell].
Oh, and the "nfsd" processes were all showing *zero* CPU
consumption. And yet the load average was between 80 and 250
[probably higher the more NFS filesystems were exported].


-Rob

p.s. By the way, the only reason I said "essentially idle" instead
of "absolutely 100% totally idle" is that there are a few logging
tasks that run for a small fraction of a second every 5 minutes,
but given that there was nothing for them to log... ;-}  ;-}

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Robert Uhl
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <m34p7x2wa7.fsf@latakia.octopodial-chrome.com>
····@rpw3.org (Rob Warnock) writes:
>
> Not a problem. The only reason I was being pretty stubborn about it
> was that Linux is the *only* place I've ever seen these silly 100+
> load averages on essentially idle systems!!

You know, one could always submit a patch...

-- 
Robert Uhl <http://public.xdi.org/=ruhl>
Rack mount machines look nice, no doubt about it, but they were never
meant to be worked on.  If the fellow who invented them were anywhere
near me last night, he would be 1U high right now.       --Tom Liston
From: Rob Warnock
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <wImdnbLk2IwWe8nVnZ2dnUVZ_tninZ2d@speakeasy.net>
Robert Uhl  <·········@NOSPAMgmail.com> wrote:
+---------------
| ····@rpw3.org (Rob Warnock) writes:
| > The only reason I was being pretty stubborn about it
| > was that Linux is the *only* place I've ever seen these
| > silly 100+ load averages on essentially idle systems!!
| 
| You know, one could always submit a patch...
+---------------

I believe the company I was working for at the time did just that,
but [IIRC] it was rejected by the Linux kernel core maintainers:
"It ain't a bug, it's a *FEE-CHURE*!"  (*sigh*)  :-{

One can only hope that preferences change over time, and
that this "feature" will someday be recognized as a bug.


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Robert Uhl
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <m3bq2024c3.fsf@latakia.octopodial-chrome.com>
····@rpw3.org (Rob Warnock) writes:

> Robert Uhl  <·········@NOSPAMgmail.com> wrote:
> +---------------
> | ····@rpw3.org (Rob Warnock) writes:
> | > The only reason I was being pretty stubborn about it
> | > was that Linux is the *only* place I've ever seen these
> | > silly 100+ load averages on essentially idle systems!!
> | 
> | You know, one could always submit a patch...
> +---------------
>
> I believe the company I was working for at the time did just that,
> but [IIRC] it was rejected by the Linux kernel core maintainers:
> "It ain't a bug, it's a *FEE-CHURE*!"  (*sigh*)  :-{

Ah, well perhaps there's a good reason to do it differently, but I sure
as heck can't see it.

Brain-dead maintainers are a problem.  And it's hardly worth forking
over.

> One can only hope that preferences change over time, and
> that this "feature" will someday be recognized as a bug.

I'd actually be interested in why they consider it a feature (beyond
reflexive 'this is how it works, why change it?').

-- 
Robert Uhl <http://public.xdi.org/=ruhl>
Considering the number of wheels Microsoft has found reason to invent,
one never ceases to be baffled by the minuscule number whose shape
even vaguely resembles a circle.  --unknown
From: tortoise
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <f3e97553-6400-4f88-9810-248f643722ad@z24g2000prf.googlegroups.com>
Robert Uhl wrote:
> ····@rpw3.org (Rob Warnock) writes:
>
> > Robert Uhl  <·········@NOSPAMgmail.com> wrote:
> > +---------------
> > | ····@rpw3.org (Rob Warnock) writes:
> > | > The only reason I was being pretty stubborn about it
> > | > was that Linux is the *only* place I've ever seen these
> > | > silly 100+ load averages on essentially idle systems!!
> > |
> > | You know, one could always submit a patch...
> > +---------------
> >
> > I believe the company I was working for at the time did just that,
> > but [IIRC] it was rejected by the Linux kernel core maintainers:
> > "It ain't a bug, it's a *FEE-CHURE*!"  (*sigh*)  :-{
>
> Ah, well perhaps there's a good reason to do it differently, but I sure
> as heck can't see it.
>
> Brain-dead maintainers are a problem.  And it's hardly worth forking
> over.
>
> > One can only hope that preferences change over time, and
> > that this "feature" will someday be recognized as a bug.
>
> I'd actually be interested in why they consider it a feature (beyond
> reflexive 'this is how it works, why change it?').
>
> --
> Robert Uhl <http://public.xdi.org/=ruhl>
> Considering the number of wheels Microsoft has found reason to invent,
> one never ceases to be baffled by the minuscule number whose shape
> even vaguely resembles a circle.  --unknown

The i/o waits can be an important signal, but I never use that command;
I use top. (Linux has a good top, macosx has a terrible one).

A lot of the cheap old machines I am forced to use came with clunky
old hard drives, and even with upgrades they still need a better
controller to keep the slow old CPUs out of something like 30% wait
states.

All this crappy commercial hardware these last few years. It is cheap
tho.

___________

Lisp is mostly a waste of time: so long to learn, so complicated. It
probably runs fast, but modern times again...
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <rem-2008jun30-005@yahoo.com>
> Date: Sun, 22 Jun 2008 22:53:49 -0700 (PDT)
Why this response is so belated:
  <http://groups.google.com/group/misc.misc/msg/cea714440e591dd2>
= <······················@yahoo.com>
> From: tortoise <··········@gmail.com>
> I use top. (Linux has a good top, macosx has a terrible one).

I tried that on FreeBSD Unix. It refreshes every 2 seconds.
The % idle jumps wildly with no apparent pattern, looking like a
random value between 4% and 90% with no correlation from one sample
to the next. I don't think that's of any use to me in trying to
determine whether the one-minute or five-minute average load is
high or low so that I can know (or an automated script can know)
when is the best time to run a CPU-speed-test.


From: Rob Warnock
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <K-KdnQ0fEbEWCfXVnZ2dnUVZ_uadnZ2d@speakeasy.net>
Robert Maas, <··················@spamgourmet.com.remove> wrote:
+---------------
| > From: tortoise <··········@gmail.com>
| > I use top. (Linux has a good top, macosx has a terrible one).
| 
| I tried that on FreeBSD Unix. It refreshes every 2 seconds.
+---------------

Try "top -s1" [or on Linux, "top -d1"] for once-per-second refresh.


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: George Neuner
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <89h054hdnooigc59l15fsf02r09c1fdbpt@4ax.com>
On Wed, 11 Jun 2008 02:00:34 -0700 (PDT), Tim Bradshaw
<··········@tfeb.org> wrote:

>(This next bit is based on memory and may be wrong in places.) Solaris
>uses mapped files pervasively, to the extent that normal memory
>allocation is treated as mapping a file, which file happens to sit in
>a special filesystem which is backed by swap.  For a long time this
>gave rise to all sorts of problems, one of which was that systems
>which were actually I/O bound could show up as having high load
>averages.  Another, worse, problem was that processes doign intensive
>I/O (which is most of the important ones on your average big database
>box) could cause the VM system to start paging stuff that mattered
>out, such as executables.  

You're right at least on the history.  Solaris 2 definitely had these
problems.


>This all got fixed a few years ago by
>making the system more aware of whether memory was backed by normal
>files, or by anonymous swap, and I think by further treating read-only
>executable file mappings specially.

I'll take your word for it.  I haven't touched Solaris since 2.5.1

George
--
for email reply remove "/" from address
From: Tim X
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <873anns2wk.fsf@lion.rapttech.com.au>
Raymond Wiker <···@RawMBP.local> writes:

> Tim Bradshaw <··········@tfeb.org> writes:
>
>> On Jun 7, 1:58 am, Tim X <····@nospam.dev.null> wrote:
>>
>>>
>>> I've heard this before and I think it's misleading. The way it was
>>> explained to me is that a value less than 1 means that there were
>>> spare/wasted CPU cycles, i.e. cycles where nothing was waiting to run on
>>> the cpu. A value of 1 means that every cpu cycle was being used. A value
>>> above 1 indicates that there was some contention for cpu cycles.
>>
>> As others have said, this really isn't right.  Firstly average is
>> computed over the number of cores (or virtual CPUs in some sense),
>> and secondly the load is not just dependent on CPU utilisation but
>> also on I/O.  It is quite common to see machines with very high load
>> with almost no CPU utilisation, as they are starving for I/O.
>
> 	I think the definition of the load average is the number of
> runnable processes in the run queue - i.e., processes that are not
> currently waiting for I/O or sleeping. Thus, I/O bound processes
> should not count, but I could be wrong about this.

I think it is operating system dependent. I think Solaris does this and
many of the 'older' Unix systems did. However, I'm not sure if this
holds for Linux, and I have no idea with respect to BSD. It certainly
was the definition back in the late 80s/90s IIRC.

Tim

-- 
tcross (at) rapttech dot com dot au
From: Tim Bradshaw
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <84300333-fdf8-4b57-9b2c-b15bb57a0c4b@e39g2000hsf.googlegroups.com>
On Jun 8, 8:33 pm, Raymond Wiker <····@RawMBP.local> wrote:

>         I think the definition of the load average is the number of
> runnable processes in the run queue - i.e., processes that are not
> currently waiting for I/O or sleeping. Thus, I/O bound processes
> should not count, but I could be wrong about this.

You are, sorry!  Watch a machine with some disk-hound application and
you will often see the processors almost entirely idle with a huge
load average.
From: Tim X
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <877iczs30x.fsf@lion.rapttech.com.au>
Tim Bradshaw <··········@tfeb.org> writes:

> On Jun 7, 1:58 am, Tim X <····@nospam.dev.null> wrote:
>
>>
>> I've heard this before and I think it's misleading. The way it was
>> explained to me is that a value less than 1 means that there were
>> spare/wasted CPU cycles, i.e. cycles where nothing was waiting to run on
>> the cpu. A value of 1 means that every cpu cycle was being used. A value
>> above 1 indicates that there was some contention for cpu cycles.
>
> As others have said, this really isn't right.  Firstly average is
> computed over the number of cores (or virtual CPUs in some sense),
> and secondly the load is not just dependent on CPU utilisation but
> also on I/O.  It is quite common to see machines with very high load
> with almost no CPU utilisation, as they are starving for I/O.
>

I obviously wasn't clear enough. I was providing a simplistic
explanation and was trying to explain that relying on the load average
values alone to judge the load on a system is insufficient. It was
mainly in response to the point put forward by another poster who
stated they had been told that a load above 1 indicated the system was
'in trouble', which I think is misleading.

>> I do
>> remember seeing some debate on various lists on how to 'fix' things so
>> that multi-core systems were giving accurate indicators.
>
> This is a non-problem, since the denominator of load is the number of
> cores/virtual CPUs.

Possibly now it is. At the time, this was not necessarily the case and
the debate I saw was on how to best represent a high level indicator of
load given the new cpu architectures. There were a number of arguments
against using a simple averaging approach. As has been seen from other
posts, the other problem is that different operating systems calculate
the load average figure differently and the value you see on one system
doesn't reflect the same calculation as you see on another. I also
suspect operating systems like Linux that allow you to select different
process scheduling and queuing algorithms further impact the accuracy
of the result.

In general, I think load average is only a very rough measure and you
have to use many other indicators when considering a system's
performance, such as how I/O is factored in, memory use, disk I/O,
network bandwidth and user expectations.

Tim
 

-- 
tcross (at) rapttech dot com dot au
From: Tim Bradshaw
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <21a685b9-52d5-4a8b-99e5-fdb5f8a38a73@k30g2000hse.googlegroups.com>
On Jun 9, 4:25 am, Tim X <····@nospam.dev.null> wrote:

> Possibly now it is.

I suspect you're talking about Linux or something.  I come from a
SunOS / solaris background and it's been that way since multiprocessor
support first appeared in the early 90s.  Other things have changed
radically (for instance until relatively recently Solaris made it all
but impossible to know what the memory pressure was like on a system),
but load hasn't changed in this world since BSD.
From: Tim X
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <871w34qt3z.fsf@lion.rapttech.com.au>
Tim Bradshaw <··········@tfeb.org> writes:

> On Jun 9, 4:25 am, Tim X <····@nospam.dev.null> wrote:
>
>> Possibly now it is.
>
> I suspect you're talking about Linux or something.  I come from a
> SunOS / solaris background and it's been that way since multiprocessor
> support first appeared in the early 90s.  Other things have changed
> radically (for instance until relatively recently Solaris made it all
> but impossible to know what the memory pressure was like on a system),
> but load hasn't changed in this world since BSD.

I think this thread sort of proves the main point I was trying to get
across, that load average is a very poor measure of anything when taken
in isolation. The fact that there seem to be so many different opinions
regarding how it is calculated and how it differs between different
flavors of Unix just makes such measurements even less useful. To some
extent, it reminds me of all the arguments regarding MIPS calculations
and debates between the RISC and CISC camps of the late 80s/early 90s. 

Remember when Linux measured processing in 'bogomips' (i.e. bogus
MIPS)?





-- 
tcross (at) rapttech dot com dot au
From: Mark Wooding
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <slrng4vdso.ihm.mdw@metalzone.distorted.org.uk>
Tim X <····@nospam.dev.null> wrote:

> Remember when Linux measured processing in 'bogomips' (i.e. bogus
> MIPS)?

Err, it still does.

-- [mdw]
From: Tim X
Subject: Re: Nice processes on Unix
Date: 
Message-ID: <87tzfzozbb.fsf@lion.rapttech.com.au>
Mark Wooding <···@distorted.org.uk> writes:

> Tim X <····@nospam.dev.null> wrote:
>
>> Remember when Linux measured processing in 'bogomips' (i.e. bogus
>> MIPS)?
>
> Err, it still does.
>

I guess I've just not seen it amongst the absolutely huge number of
boot messages Linux now produces, at a speed that makes them impossible
to read at boot time.

(I'm old enough to remember network speeds when you could just read
the file with cat, and Linux kernel builds that took most of the night
to compile!)

tim



-- 
tcross (at) rapttech dot com dot au
From: Madhu
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <m3zlqrk1zn.fsf@meer.net>
* (Robert Maas, http://tinyurl.com/uh3t) <·················@yahoo.com> :
Wrote on Thu, 15 May 2008 05:19:49 -0700:

| % uptime
|  5:01AM  up 3 days, 25 mins, 9 users, load averages: 5.11, 6.07, 6.49
[...]
|> Another tool you may use is batch(1), which enqueues jobs, and
|> will start them only when the load average is low enough.
|
| That's worthless for my purpose, unless you know a way to use it to
| deal with interactive Lisp session.

The concept is simple enough to implement in a few lines of lisp

|> If you had a lot of little tasks to execute, you could use batch.
|
| If I have a lot of little tasks to execute, I put them into a PROGN
| or LIST, and if I'm going to do the same sequence of tasks many
| times over different days then I might put them into a DEFUN.
| For example, here is the LIST of tasks I do when I first start up CMUCL:
| (list
|  (setq *gc-verbose* nil)
|  (unless (fboundp 'funct+shortdir+fn1-mayload)
|     (load "/home/users/rem/LispWork/2007-2-mayload.lisp"))
|  (funct+shortdir+fn1-mayload 'filenamebase-std-roll-before :LISP "2007-2-roll")
|  (funct+shortdir+fn1-mayload 'load-file-by-method :LISP "2005-8-readers")
|  (funct+shortdir+fn1-mayload 'dirspec+filnam+method+globsym-may-load :LISP "2008-4-MayLoad")
|  (funct+shortdir+fn1-mayload 'make-empty-heap :LISP "2001-B-heap")
|  (funct+shortdir+fn1-mayload 'string-read-words-batch-sort :LISP "2008-3-WordHist")
|  (funct+shortdir+fn1-mayload 'phvec-normalize :LISP "2008-3-ProxHash")
|  (funct+shortdir+fn1-mayload 'device-to-ar+mr+mc :LISP "2008-3-TextGraphics")
|  (funct+shortdir+fn1-mayload 'trans-skills-lines :LISP "2008-3-TopPH")
|  (funct+shortdir+fn1-mayload 'trans-skills-3d-012-links :LISP "2008-5-TopPH")
|  )

(defvar *batch-job-queue* *)   ; `*' = value of the previous REPL form,
                               ; i.e. (presumably) the job list above

| I don't see how batch could possibly help me with those tasks.
;; No need for batch:

(defun parse-uptime-load-average ()
  "Call external program `uptime' and return the first load average."
  (let* ((string
          (with-output-to-string (stream)
            (ext:run-program "uptime" nil :output stream)))
         ;; The last colon in the output precedes the load averages;
         ;; the first comma after it ends the 1-minute figure.
         (beg (position #\: string :from-end t))
         (end (position #\, string :start beg))
         *read-eval*)                   ; bound to NIL: READ safely
    (read-from-string string t nil :start (1+ beg) :end end)))
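
;; For example, with the uptime line quoted at the top of this message
;; ("load averages: 5.11, 6.07, 6.49"):
;;   (parse-uptime-load-average) => 5.11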

(defvar +batch-threshold-loadavg+ 3.0
  "When the load average drops below this, execute jobs in the job queue")

(defvar +batch-periodic+ 60
  "Time interval in seconds.  Our batch timer process wakes up every
  +batch-periodic+ seconds to check on the load average.")

(defun run-batch-job-queue ()
  (loop (cond (*batch-job-queue*
               (cond ((< (parse-uptime-load-average) +batch-threshold-loadavg+)
                      (let ((form (pop *batch-job-queue*)))
                        (apply (car form) (cdr form)))) ; HANDLE-ERRORS HERE
                     (t (sleep +batch-periodic+))))
              (t (return 'NOJOBSLEFT)))))

(run-batch-job-queue)

;; no need to lock protect the queue for this use case. start it up in
;; the background with (mp:make-process #'run-batch-job-queue)
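
;; Each queued job is expected to be a list of (function . arguments);
;; e.g., to enqueue one job (hypothetical file name):
;;   (push (list #'compile-file "/home/users/rem/LispWork/foo.lisp")
;;         *batch-job-queue*)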
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Multiprocessing in CMUCL (was: Parallel Common-Lisp with at least 64 processors?)
Date: 
Message-ID: <rem-2008jun05-008@yahoo.com>
> From: Madhu <·······@meer.net>
> The concept is simple enough to implement in a few lines of lisp
[code sample snipped]

;; no need to lock protect the queue for this use case. start it up in
;; the background with (mp:make-process #'run-batch-job-queue)

> * (describe 'mp:make-process)
> MAKE-PROCESS is an external symbol in the MULTIPROCESSING package.
> Function documentation:
>   Make a process which will run function when it starts up. ...

I assume it means call the function with no arguments, i.e. APPLY
the function to the empty list? The documentation strings visible
by DESCRIBE in CMUCL are often not quite the whole truth, and leave
a lot of important info unstated.
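
For example, a zero-argument closure seems to be what is wanted
(a guess based only on that docstring):
  (mp:make-process (lambda () (print 'hello)))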

I wonder where this sort of stuff in CMUCL is documented? Google search:
  CMUCL MULTIPROCESSING package MAKE-PROCESS
lots of discussion, and some source code, but no documentation of that package
  CMUCL MULTIPROCESSING package MAKE-PROCESS documentation
not there either
  CMUCL MULTIPROCESSING package MAKE-PROCESS tutorial
not there either. I give up. Where is online documentation for it??
From: Madhu
Subject: Re: Multiprocessing in CMUCL
Date: 
Message-ID: <m38wxjjbpr.fsf@meer.net>
* (Robert Maas, http://tinyurl.com/uh3t) <·················@yahoo.com> :
Wrote on Thu, 05 Jun 2008 14:55:33 -0700:

|> * (describe 'mp:make-process)
|> MAKE-PROCESS is an external symbol in the MULTIPROCESSING package.
|> Function documentation:
|>   Make a process which will run function when it starts up. ...
|
| I assume it means call the function with no arguments, i.e. APPLY the
| function to the empty list? The documentation strings visible by
| DESCRIBE in CMUCL are often not quite the whole truth, and leave a lot
| of important info unstated.
[...]
| not there either. I give up. Where is online documentation for it??

I'm afraid I've found that the comments in the source, and the source
itself, are still the most accurate/best documentation for CMUCL's MP
implementation.

See <URL:http://www.trakt7.net/cmucl%20and%20multiprocessing>

It is based on the CLIM-SYS API which is specified in the CLIM
specification.  If you can live with the fact that there may be some
differences, see the clim-sys specification (mikemac.com site linked in
the above URL), or in a slightly nicer form at:
<URL:http://bauhh.dyndns.org:8000/clim-spec/B-2.html>

--
Madhu
From: Madhu
Subject: Re: Multiprocessing in CMUCL
Date: 
Message-ID: <m3ve0nhutk.fsf@meer.net>
* Madhu <··············@meer.net> Wrote on Fri, 06 Jun 2008 12:23:36 +0530:

| See <URL:http://www.trakt7.net/cmucl%20and%20multiprocessing>

I'm sorry I suggested this without any caveats and would like to take
this recommendation back.  [I mentioned the URL because it listed some
other links, but the page has too many faults, and I am not up to fixing
them] To wit

The docstrings listed here are based on an old CMUCL 18e (Early 2003?),
current CMUCL is 19e. Although the MP implementation has not changed, the
particular build mentioned seems to be buggy:

- The listed docstrings are incomplete: the one for MAKE-PROCESS, for
  instance, does not document its keyword arguments.

- It does not mention MULTIPROCESSING::STARTUP-IDLE-AND-TOP-LEVEL-LOOPS,
  which is required for any performant system.  (This has to be executed
  at the REPL at startup, and will start a new top level loop)
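
  For concreteness, that means evaluating at the REPL, before spawning
  any processes:

    (mp::startup-idle-and-top-level-loops)

  (double colon, because as the name above shows the symbol is internal).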

| CMUCL's MP implementation
| is based on the CLIM-SYS API which is specified in the CLIM
| specification.  If you can live with the fact that there may be some
| differences, see the clim-sys specification (mikemac.com site linked in
| the above URL), or in a slightly nicer form at:
| <URL:http://bauhh.dyndns.org:8000/clim-spec/B-2.html>

I've used and would recommend a mixture of the docstrings (which are
reasonably good) and the clim-spec as a guide to CMUCL's MP.  Be warned
that the implementation (x86 only) is reasonably stable for all common
uses but is not polished and would still be billed as `experimental' in
many circles.

--
Madhu
From: Rob Warnock
Subject: Re: Multiprocessing in CMUCL
Date: 
Message-ID: <yKCdnevzreFQh9TVnZ2dnUVZ_tPinZ2d@speakeasy.net>
Madhu  <·······@meer.net> wrote:
+---------------
| I've used and would recommend a mixture of the docstrings (which are
| reasonably good) and the clim-spec as a guide to CMUCL's MP.  Be warned
| that the implementation (x86 only) is reasonably stable for all common
| uses but is not polished and would still be billed as `experimental' in
| many circles.
+---------------

While technically true, I have used CMUCL's "processes"[1] in
production web application servers which spawned a new process
for each HTTP request, and -- as far as I can tell -- in over
6 years and three sites there have been *no* crashes due to
instability in CMUCL's MP primitives.

But as usual, YMMV. In particular, I was careful to make sure
that in normal operation the CMUCL image doesn't have to handle
any Unix signals. E.g., *don't* use the (truly-experimental!!)
pre-emptive MP scheduling which relies on SIGALRM, but instead
use only the default cooperative scheduling [which uses the timeout
parameter in the "select()" system call]. *Don't* allow SIGPIPE;
set it to SIG_IGN and handle the EPIPE I/O error exception instead.
Etc., etc.
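
(In CMUCL terms, the SIGPIPE part is roughly -- a sketch from memory,
using the SYSTEM and UNIX packages:

    (system:ignore-interrupt unix:sigpipe)

after which a write on a dead connection signals an I/O error carrying
EPIPE, which you can catch with HANDLER-CASE instead.)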


-Rob

[1] What CMUCL calls "multiprocessing" (a heritage from CLIM?)
    is what most other people would call "multiprogramming"
    or even simply user-mode coroutines ("green threads").

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Rob Warnock
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <GtCdnQJvRYdj7rDVnZ2dnUVZ_qPinZ2d@speakeasy.net>
Robert Maas, <···················@SpamGourmet.Com> wrote:
+---------------
| > From: ····@informatimago.com (Pascal J. Bourguignon)
| > On a linux system you can have a look at /proc/cpuinfo.
| > cat /proc/cpuinfo
| 
| That doesn't help me on my shell account running FreeBSD Unix.
| % more /proc/cpuinfo
| /proc/cpuinfo: No such file or directory
+---------------

Actually, if your FreeBSD provider has mounted the "linprocfs"
pseudo-device then all you need to do is look in the right place:

    $ uname -rs
    FreeBSD 6.2-RELEASE-p4
    $ df | grep proc
    linprocfs     4      4     0   100%    /usr/compat/linux/proc
    procfs        4      4     0   100%    /proc
    $ cat /proc/cpuinfo
    cat: /proc/cpuinfo: No such file or directory
    $ cat /usr/compat/linux/proc/cpuinfo
    processor	: 0
    vendor_id	: AuthenticAMD
    cpu family	: 15
    model	: 1
    model name	: AMD Athlon(tm) 64 Processor 3500+
    stepping	: 0
    flags	: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 b19 mmx fxsr xmm b26
    cpu MHz	: 2211.34
    bogomips	: 2211.34
    $


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Nigerian/419 spammer harvested this address (was: Parallel Common-Lisp with at least 64 processors?)
Date: 
Message-ID: <rem-2008jun11-001@yahoo.com>
> From: ···················@SpamGourmet.Com (Robert Maas, http://tinyurl.com/uh3t)

I posted several articles under this address, but one of them got
harvested by a Nigerian spammer, who then sent me two spam to the
same address before I discovered it. As a result, I've shut down
this address so that I won't get any more spam via this address.
Any e-mail sent to this address will be accepted by the server then
just discarded without any non-delivery notice. That's why I'm
posting this warning, just in case somebody saw any of my articles
just now and wants to reply to me privately. If anybody wants to
send private e-mail to me regarding anything I've posted, you'll
have to look around to find some other variant address that I
haven't yet disabled, or go to my Web site and click on "Contact
me".

As of a few days ago, Yahoo! Mail no longer provides any way to see
full headers of spam that comes in, so that's why the info isn't
included here. Previous recent Nigerian 419 spam of the same type
came from an IP number owned by Egyptian University, with a dropbox
at HotMail. Obviously neither Egypt nor MicroSoft is doing anything
to stop such spam.

Today in fact I received six spam, using four newly-harvested
SpamGourmet forwarding addresses, all of which I've now shut down:

·················@spamgourmet.com
···················@spamgourmet.com
···············@spamgourmet.com
···············@spamgourmet.com
···················@spamgourmet.com
···················@spamgourmet.com


-
Nobody in their right mind likes spammers, nor their automated assistants.
To open an account here, you must demonstrate you're not one of them.
Please spend a few seconds to try to read the text-picture in this box:

/----------------------------------------------------------------------------\
|   ~|~|_  _|_|       _    | _|  ._  _ _|_  |o __|_ _._    _|_|_  _|_|)._ _  |
|    | | |}_ _|  \/\/(_)|_||(_|  | |(_) |   ||_\ | }_| |)   | | |}_ _| | }_  |
|   ._  _ _|_  |o __|_ _._ o._ (~|   __|_o||   |) _._|_  _ |) _  _|_|_  _|_| |
|   | |(_) |   ||_\ | }_| ||| | _|  _\ | |||)  | }_| | |(_|| _\   | | |}_ _| |
|   ._  _   _._      o||                                                     |
|   | |}_\/}_|   \/\/|||o                                                    |
\----(Rendered by means of <http://www.schnoggo.com/figlet.html>)------------/
     (You don't need JavaScript or images to see that ASCII-text image!!
      You just need to view this in a fixed-pitch font such as Monaco.)

Then enter your best guess of the text (50-150 chars) into this TextArea:
   +------------------------------------------------------------+
   |                                                            |
   |                                                            |
   |                                                            |
   |                                                            |
   +------------------------------------------------------------+
From: John Thingstad
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <op.uaadp8mqut4oq5@pandora.alfanett.no>
På Sun, 27 Apr 2008 15:19:10 +0200, skrev Tim Bradshaw
<··········@tfeb.org>:

> On Apr 26, 2:21 am, ·················@SpamGourmet.Com (Robert Maas,
> http://tinyurl.com/uh3t) wrote:
>> Does anybody you know of have access to a computer with at least 64
>> CPUs, with a version of Common Lisp that runs on that computer and
>> supports distributing the function-applications within a MAPCAR
>> call across as many CPUs as are available in order to achieve great
>> speed-up compared to the usual algorithm of performing each
>> function-application in sequence down the list? Would anybody
>> volunteer such a system for me to occasionally use across the net
>> without charge, for research purposes?
>
> Those systems typically don't exist any more, because the memory
> coherency & latency costs are too high for distributing stuff in such
> a fine-grained way.  There are plenty of machines with lots of cores -
> the systems I deal with mostly have up to 144, though we very seldom
> use them in a single domain (and no, they can't be lent out).  But
> these systems typically make use of significantly coarser-grained
> multiprocessing, so that you're not just killed by communication cost
> all the time.
>
> One interesting possibility though is heavily multicored/multithreaded
> processors such as Sun's Niagara family.  I don't know what the
> latency issues are between cores for these things, but between HW
> threads (which look like virtual CPUs) it is essentially zero I
> think.  I think the current largest of these systems is currently 16
> core / 128 thread.
>
> I doubt there are CL implementations which take advantage of these
> systems however.

"The Scieneer implementation of the Common Lisp language has been  
developed to support enterprise and performance computing applications,  
and features innovative multi-threading support for symmetrical  
multi-processor systems which clearly differentiates it from the  
competition."

http://www.scieneer.com/scl/index.html

It does not support vectorization if that is what you mean.

--------------
John Thingstad
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <rem-2008may01-003@yahoo.com>
> From: "John Thingstad" <·······@online.no>
> "The Scieneer implementation of the Common Lisp language has been
> developed to support enterprise and performance computing
> applications, and features innovative multi-threading support for
> symmetrical multi-processor systems which clearly differentiates it
> from the competition."
> http://www.scieneer.com/scl/index.html

Hmm, it looks pretty nice. Does anyone here have direct experience
with it sufficient to rate it as to how well it actually works in
practice?

Browsing links from there:

   Linkname: UFFI: Universal Foreign Function Interface for Common Lisp
        URL: http://uffi.b9.com/
   ... Every Common Lisp implementation has a method
   for interfacing to such libraries. Unfortunately, these method vary
   widely amongst implementations.                               ^s
(typo, anybody here have authorization to fix the typo?)

   ... UFFI wraps this common subset of functionality with
   it's own syntax ...
     x (another typo)

> It does not support vectorization if that is what you mean.

<http://en.wikipedia.org/wiki/Vectorization>
   Vectorization, in computer science, is the process of converting a
   computer program from a scalar implementation, which does an operation
   on a pair of operands at a time, to a vectorized program where a
   single instruction can perform multiple operations or a pair of vector
   (series of adjacent values) operands. Vector processing is a major
   feature of both conventional and modern supercomputers.
OK, clarification: When I use the term, I'm not referring to the
*automatic* conversion from a conventional program to a vectorized
version. I'm merely referring to the final result, a single CPU
instruction that processes a whole array of data with the same
function with memory access and actual computing overlapped as fast
as the internal busses in the CPU can accommodate. For example, a
single instruction might compute the pairwise difference of two
arrays writing the differences to a pre-allocated third array, and
a second instruction might compute the squares of those
differences, and a third instruction might compute the sum of those
squares, thereby computing the variance between two vectors in just
three machine instructions. A fourth, non-vectorized operation,
would compute the square root of that sum of squares of
differences, thereby computing the Cartesian distance between the
two original vectors.

But this is a completely separate topic from the multi-CPU question
I posed. For my current application, I have a set of records, each
of which is to be pre-processed in exactly the same way:

- Convert to list of words, all lower case.
- Convert each word to bigrams trigrams and tetragrams, separately,
   and accumulate those results separately for each list-of-words.
I imagine all of that to be runnable on parallel processes, hence
the 64-CPU query. No vectorization happens there.
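
(A sketch of the per-record step in portable CL; the names here are
invented for illustration, not the functions loaded above:)

(defun word-ngrams (word n)
  "List the N-character substrings (n-grams) of WORD."
  (loop for i from 0 to (- (length word) n)
        collect (subseq word i (+ i n))))

(defun record-ngram-histogram (words n)
  "Histogram, as a hash table, of all N-grams over the list WORDS."
  (let ((hist (make-hash-table :test #'equal)))
    (dolist (word words hist)
      (dolist (gram (word-ngrams word n))
        (incf (gethash gram hist 0))))))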

Next:
- Accumulate all those bigram trigram and tetragram statistics
   separately for the entire corpus, yielding three whole-corpus
   histograms.
That would be done on the main computer after getting the many
individual triples from the sub-processes.

Next:
- Divide each of the three single-record histograms for each record
   by the whole-corpus histogram for that class among the three, to
   yield the three frequency-ratio histograms for each such record.
- Merge those three frequency-ratio histograms for the record into
   a single ratio histogram for that record.
- Normalize that merged ratio histogram to have Cartesian length 1.
- Compute the ProxHash, which is a 64-component vector, for the record.
I imagine all of that to be runnable on parallel processes, hence
the 64-CPU query. No vectorization happens there.
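
(For the normalization step, a minimal sketch; PHVEC-NORMALIZE above is
my real function, this only shows the idea of scaling a vector to
Cartesian length 1:)

(defun normalize-to-unit-length (v)
  "Return a copy of vector V scaled to Euclidean (Cartesian) length 1."
  (let ((len (sqrt (reduce #'+ v :key (lambda (x) (* x x))))))
    (map 'vector (lambda (x) (/ x len)) v)))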

Next:
- Perform various calculations of the difference between partial or
   full ProxHash vectors in the process of building a nearest-neighbor
   network.
This is where vectorization would happen, but *not* on distributed
computers, merely on one (1) moderately vectorized computer, able
to compute Cartesian distance between two vectors (up to 64
elements in each vector) in four machine instructions
(diff,square,sum,sqrt). The first three operations would be
standard vectorized opcodes, whereas the sqrt would be specially
micro-coded to run extremely quickly by finite Newton's method
pipelined directly in the CPU internal bus structure. (It may be
that some commercial CPUs already include a built-in
vendor-supplied SQRT opcode, in which case of course no additional
micro-coding would be needed.)
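
(The same diff/square/sum/sqrt pipeline written sequentially in
portable CL, for comparison; a sketch, not the vectorized version:)

(defun cartesian-distance (u v)
  "Euclidean distance between equal-length vectors U and V."
  (sqrt (reduce #'+ (map 'vector
                         (lambda (x y) (let ((d (- x y))) (* d d)))
                         u v))))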

Of course the CPU I speak of for the vectorized calculation might
be an auxiliary "vector processor", or it might be functionality
built into a high-performance main CPU.
From: John Thingstad
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <op.uahmm9t3ut4oq5@pandora.alfanett.no>
På Thu, 01 May 2008 10:57:01 +0200, skrev Robert Maas,
http://tinyurl.com/uh3t <·················@SpamGourmet.Com>:

> OK, clarification: When I use the term, I'm not referring to the
> *automatic* conversion from a conventional program to a vectorized
> version. I'm merely referring to the final result, a single CPU
> instruction that processes a whole array of data with the same
> function with memory access and actual computing overlapped as fast
> as the internal busses in the CPU can accommodate. For example, a
> single instruction might compute the pairwise difference of two
> arrays writing the differences to a pre-allocated third array, and
> a second instruction might compute the squares of those
> differences, and a third instruction might compute the sum of those
> squares, thereby computing the variance between two vectors in just
> three machine instructions. A fourth, non-vectorized operation,
> would compute the square root of that sum of squares of
> differences, thereby computing the Cartesian distance between the
> two original vectors.

Have you seen this paper on the MapReduce algorithm Google uses?
http://labs.google.com/papers/mapreduce.html
It might give you some idea of how to roll your own.

--------------
John Thingstad
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <rem-2008may08-002@yahoo.com>
> From: "John Thingstad" <·······@online.no>
> Have you seen this paper on the MapReduce algorithm Google uses?
> http://labs.google.com/papers/mapreduce.html

No, I never heard of it before. Taking a look now ...

    Users specify a map function that processes a key/value pair to generate
    a set of intermediate key/value pairs, and a reduce function that merges
    all intermediate values associated with the same intermediate key.

So if I understand it correctly, if I want to reduce all the data
to a single value, then I generate the intermediate keys all the
same, but using different intermediate keys allows several
independent data-reduction tasks to run in parallel to produce
multiple keyed outputs? So the first stage runs fully parallel per
the different input pairs, but then the second stage runs at a
lesser degree of parallelism just one per intermediate key?
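
(To check my understanding, a toy one-machine rendering of the idea --
a sketch only, nothing like Google's actual API: MAPPER returns a list
of (key . value) pairs, and REDUCER merges the values grouped under
each intermediate key:)

(defun toy-map-reduce (mapper reducer inputs)
  (let ((groups (make-hash-table :test #'equal))
        (results '()))
    (dolist (input inputs)          ; "map" phase: parallelizable per input
      (dolist (pair (funcall mapper input))
        (push (cdr pair) (gethash (car pair) groups))))
    (maphash (lambda (key values)   ; "reduce" phase: one task per key
               (push (cons key (funcall reducer key values)) results))
             groups)
    results))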

   a typical MapReduce computation
   processes many terabytes of data on thousands of machines.

It doesn't sound like Google would let me play with it for free.

> It might give you some idea of how to roll your own.

Without access to a cluster of machines, there's no way I can
develop *any* such software myself. And if I tried to clone
thousands of active processes on this shell account just to develop
the software, I'd get in bad trouble with the admin.

But at least in principle it sounds like a nifty mathematical
abstraction that indeed should have many possible uses, just as you
and Google's WebPage claim. So thanks for sharing the info.
From: ·············@gmail.com
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <cef6952a-8907-4369-8e2a-51cd4bb58574@59g2000hsb.googlegroups.com>
On May 8, 3:04 am, ···················@SpamGourmet.Com (Robert Maas,
http://tinyurl.com/uh3t) wrote:

> Without access to a cluster of machines, there's no way I can
> develop *any* such software myself. And if I tried to clone
> thousands of active processes on this shell account just to develop
> the software, I'd get in bad trouble with the admin.
>


Renting instances on Amazon Elastic Compute Cloud (EC2)?
From: Tim Bradshaw
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <3ad6d576-0208-4d46-8957-d01db624cc7a@r66g2000hsg.googlegroups.com>
On May 8, 9:04 am, ···················@SpamGourmet.Com (Robert Maas,
http://tinyurl.com/uh3t) wrote:

> Without access to a cluster of machines, there's no way I can
> develop *any* such software myself. And if I tried to clone
> thousands of active processes on this shell account just to develop
> the software, I'd get in bad trouble with the admin.

Of course you can.  You'll just take a performance hit since you'll be
emulating a large number of cores on one.  You may also not notice
bugs related to concurrency.  There's a very long history of people
doing this.  Indeed, time on big machines is typically so scarce and
expensive that very few people develop code on them: that would be an
absurd waste of resources.

--tim
From: George Neuner
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <h2jj24l6sjtunra297e45hq41pjp2eltov@4ax.com>
On Tue, 13 May 2008 06:28:00 -0700 (PDT), Tim Bradshaw
<··········@tfeb.org> wrote:

>On May 8, 9:04 am, ···················@SpamGourmet.Com (Robert Maas,
>http://tinyurl.com/uh3t) wrote:
>
>> Without access to a cluster of machines, there's no way I can
>> develop *any* such software myself. And if I tried to clone
>> thousands of active processes on this shell account just to develop
>> the software, I'd get in bad trouble with the admin.
>
>Of course you can.  You'll just take a performance hit since you'll be
>emulating a large number of cores on one.  You may also not notice
>bugs related to concurrency.  There's a very long history of people
>doing this.  Indeed, time on big machines is typically so scarce and
>expensive that very few people develop code on them: that would be an
>absurd waste of resources.
>
>--tim

You can pretty well approximate a cluster on a multi-core machine.
Finding latency bugs will still be a problem though ... a debug
framework that inserts short random delays into message delivery can
help with that (don't know of a canned one offhand, but it's easy
enough to funnel messaging through a server thread that does it).
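
A sketch of the trick, with SEND-MESSAGE standing in for whatever
delivery primitive the messaging layer actually provides:

  (defun delaying-send (queue message)
    "Deliver MESSAGE to QUEUE after a short random delay,
     to shake out hidden latency assumptions."
    (sleep (random 0.05))
    (send-message queue message))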

George
--
for email reply remove "/" from address
From: Tim Bradshaw
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <91d348be-54b6-460f-824f-0cf69a59dd0f@a70g2000hsh.googlegroups.com>
On May 13, 6:19 pm, George Neuner <·········@/comcast.net> wrote:

> You can pretty well approximate a cluster on a multi-core machine.

You can approximate one on a single-core machine: that's the wonder of
computers.  Of course the performance characteristics will be
different and you'll have to do some work to make sure bugs show up,
but it can be done, and anyone who is claiming that they can never
develop any code because they don't have access to time on very
expensive hardware is just trying to put obstacles in their own way to
avoid doing things.

Does no one remember the *Lisp simulator?  I ran that on a Sun 3.
From: George Neuner
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <2mek24h4e9l6sv1sbrgce46s2n89uhskiq@4ax.com>
On Tue, 13 May 2008 14:44:38 -0700 (PDT), Tim Bradshaw
<··········@tfeb.org> wrote:

>On May 13, 6:19 pm, George Neuner <·········@/comcast.net> wrote:
>
>> You can pretty well approximate a cluster on a multi-core machine.
>
>You can approximate one on a single-core machine

Of course - been there, done that.  

But sans the aid of a simulator that can pretty faithfully reproduce
the behavior of a multiprocessor, it can be quite difficult to make
sure your code is clear of synchronization and latency bugs.  Actually
having several processors (or cores) to run in parallel can speed up
tripping a hidden problem.

[Not that there should be any hidden problems, but IME many developers have
considerable difficulty moving between the different abstraction
levels and seeing the application design at all levels.  Thus
debugging tends to be a much larger part of concurrent designs.]


>: that's the wonder of
>computers.  Of course the performance characteristics will be
>different and you'll have to do some work to make sure bugs show up,
>but it can be done, and anyone who is claiming that they can never
>develop any code because they don't have access to time on very
>expensive hardware is just trying to put obstacles in their own way to
>avoid doing things.
>
>Does no one remember the *Lisp simulateor?  I ran that on a Sun 3.

Never used it.  And IIRC it wasn't a multiprocessor Lisp simulation.

George
--
for email reply remove "/" from address
From: Tim Bradshaw
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <84bb9cf7-c502-43e3-8dfe-fb7fbe76ffeb@8g2000hse.googlegroups.com>
On May 14, 2:36 am, George Neuner <·········@/comcast.net> wrote:

>
> Of course - been there, done that.
>
> But sans the aid of a simulator that can pretty faithfully reproduce
> the behavior of a multiprocessor, it can be quite difficult to make
> sure your code is clear of synchronization and latency bugs.  Actually
> having several processors (or cores) to run in parallel can speed up
> tripping a hidden problem.

I think we're agreeing here :-).  My real point was that there's no
reason to wait for time on expensive and perhaps not-yet-existing HW
to write code which will exploit it, and indeed almost all code for
such HW is written on simulators of one kind or another.

> Never used it.  And IIRC it wasn't a multiprocessor Lisp simulation.

Well, it gave you the constructs & operators which you had on the
Connection Machine (CM-2?) which was a fairly large multiprocessor,
albeit of a kind (SIMD) not seen much nowadays (though aren't a lot of
graphics cards essentially SIMD systems?)

--tim
From: George Neuner
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <adjn24l6mbmacp8onb3f14s48k9n2l7l1f@4ax.com>
On Wed, 14 May 2008 02:48:28 -0700 (PDT), Tim Bradshaw
<··········@tfeb.org> wrote:

>On May 14, 2:36 am, George Neuner <·········@/comcast.net> wrote:
>
>>On Tue, 13 May 2008 14:44:38 -0700 (PDT), Tim Bradshaw
>><··········@tfeb.org> wrote:
>
>>>Does no one remember the *Lisp simulator?
>
>> Never used it.  And IIRC it wasn't a multiprocessor Lisp simulation.
>
>Well, it gave you the constructs & operators which you had on the
>Connection Machine (CM-2?) which was a fairly large multiprocessor,
>albeit of a kind (SIMD) not seen much nowadays 

I remember the CM-2 well, though I never used *Lisp ... at the time
most of my programming was in C* with a bit in Paris (CM assembler).


>(aren't a lot of graphics cards essentially SIMD systems?)

Depends on your definitions I suppose.  A single instruction does
operate on multiple data elements ... 

But to my thinking, GPUs, SSE units on x86, etc. are really (short)
vector processors - all the data elements are offset from a common
address.

On the other hand, on the early CMs and other by-gone machines that I
think of as being "real" SIMD, each ALU had independent address
generation ... allowing, for example, each ALU to follow its own path
through a graph.

It's quite difficult to do the same kind of programming using vector
ops ... whether they really are technically "SIMD" or not, I don't
consider them to be in the same league.

George
--
for email reply remove "/" from address
From: Tim Bradshaw
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <fd376e6d-a38b-48ee-99b1-37a0d2ba9d79@l42g2000hsc.googlegroups.com>
On May 15, 7:37 am, George Neuner <·········@/comcast.net> wrote:

>
> But to my thinking, GPUs, SSE units on x86, etc. are really (short)
> vector processors - all the data elements are offset from a common
> address.
>

yes, that's a much better view of what they are, I think.
From: Vend
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <a5f076d4-5add-4d0c-97c0-6315695deaef@e39g2000hsf.googlegroups.com>
On 26 Apr, 03:21, ·················@SpamGourmet.Com (Robert Maas,
http://tinyurl.com/uh3t) wrote:
> Does anybody you know of have access to a computer with at least 64
> CPUs, with a version of Common Lisp that runs on that computer and
> supports distributing the function-applications within a MAPCAR
> call across as many CPUs as are available in order to achieve great
> speed-up compared to the usual algorithm of performing each
> function-application in sequence down the list? Would anybody
> volunteer such a system for me to occasionally use across the net
> without charge, for research purposes?
>
> Of course if there are dependencies from one function application
> to the next, this parallel-mapcar wouldn't be appropriate. But I
> have an application where I need to apply a single function to a
> large number of arguments in parallel. I'm running it with nearly
> three hundred at the moment, whereupon it takes several minutes to
> do them all in succession, which isn't too bad if done rarely, but
> I envision doing the same with thousands of arguments, whereby the
> time to do them in succession would be prohibitive.

Excuse my ignorant remark, but can't you just spawn 64 threads and let
the OS scheduler balance them?
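
Something like this, I mean -- a sketch against CMUCL's MP dialect
(which, as noted upthread, is cooperative green threads, so it buys
concurrency rather than true CPU parallelism):

(defun parallel-mapcar (fn list)
  (let* ((n (length list))
         (results (make-array n))
         (done 0))
    (loop for x in list
          for i from 0
          do (let ((x x) (i i))     ; fresh bindings captured per process
               (mp:make-process
                (lambda ()
                  (setf (aref results i) (funcall fn x))
                  (incf done)))))   ; unlocked INCF is safe under a cooperative scheduler
    (mp:process-wait "parallel-mapcar" (lambda () (= done n)))
    (coerce results 'list)))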
From: John Thingstad
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <op.ucf4l4ukut4oq5@pandora.alfanett.no>
På Sun, 08 Jun 2008 21:25:08 +0200, skrev Vend <······@virgilio.it>:

> On 26 Apr, 03:21, ·················@SpamGourmet.Com (Robert Maas,
> http://tinyurl.com/uh3t) wrote:
>> Does anybody you know of have access to a computer with at least 64
>> CPUs, with a version of Common Lisp that runs on that computer and
>> supports distributing the function-applications within a MAPCAR
>> call across as many CPUs as are available in order to achieve great
>> speed-up compared to the usual algorithm of performing each
>> function-application in sequence down the list? Would anybody
>> volunteer such a system for me to occasionally use across the net
>> without charge, for research purposes?
>>
>> Of course if there are dependencies from one function application
>> to the next, this parallel-mapcar wouldn't be appropriate. But I
>> have an application where I need to apply a single function to a
>> large number of arguments in parallel. I'm running it with nearly
>> three hundred at the moment, whereupon it takes several minutes to
>> do them all in succession, which isn't too bad if done rarely, but
>> I envision doing the same with thousands of arguments, whereby the
>> time to do them in succession would be prohibitive.
>
> Excuse my ignorant remark, but can't you just spawn 64 threads and let
> the OS scheduler balance them?

Not on most schedulers today.
The reason is the garbage collector.
LispWorks and ACL lock you to one processor for all processes.
Spinlocks have different properties from semaphores.

--------------
John Thingstad
From: Vend
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <c61a7afa-daf1-48a7-9a87-b1c2aa059030@y38g2000hsy.googlegroups.com>
On 8 Giu, 22:02, "John Thingstad" <·······@online.no> wrote:
> På Sun, 08 Jun 2008 21:25:08 +0200, skrev Vend <······@virgilio.it>:
>
>
>
> > On 26 Apr, 03:21, ·················@SpamGourmet.Com (Robert Maas,
> >http://tinyurl.com/uh3t) wrote:
> >> Does anybody you know of have access to a computer with at least 64
> >> CPUs, with a version of Common Lisp that runs on that computer and
> >> supports distributing the function-applications within a MAPCAR
> >> call across as many CPUs as are available in order to achieve great
> >> speed-up compared to the usual algorithm of performing each
> >> function-application in sequence down the list? Would anybody
> >> volunteer such a system for me to occasionally use across the net
> >> without charge, for research purposes?
>
> >> Of course if there are dependencies from one function application
> >> to the next, this parallel-mapcar wouldn't be appropriate. But I
> >> have an application where I need to apply a single function to a
> >> large number of arguments in parallel. I'm running it with nearly
> >> three hundred at the moment, whereupon it takes several minutes to
> >> do them all in succession, which isn't too bad if done rarely, but
> >> I envision doing the same with thousands of arguments, whereby the
> >> time to do them in succession would be prohibitive.
>
> > Excuse my ignorant remark, but can't you just spawn 64 threads and let
> > the OS scheduler balance them?
>
> Not on most schedulers today.
> The reason is the garbage collector.
> LispWorks and ACL lock you to one processor for all processes.
> Spinlocks have different properties from semaphores.

If I understand correctly, the Sun Java VM has a multithreading/
multiprocessor garbage collector, doesn't it?
Would it be more difficult to implement one for Common Lisp?
From: George Neuner
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <7t6t44l1u9ebhs0m7vuj22gag2t4fu09j7@4ax.com>
On Sun, 8 Jun 2008 14:51:37 -0700 (PDT), Vend <······@virgilio.it>
wrote:

>On 8 Giu, 22:02, "John Thingstad" <·······@online.no> wrote:
>> På Sun, 08 Jun 2008 21:25:08 +0200, skrev Vend <······@virgilio.it>:
>>
>>
>>
>> > On 26 Apr, 03:21, ·················@SpamGourmet.Com (Robert Maas,
>> >http://tinyurl.com/uh3t) wrote:
>> >> Does anybody you know of have access to a computer with at least 64
>> >> CPUs, with a version of Common Lisp that runs on that computer and
>> >> supports distributing the function-applications within a MAPCAR
>> >> call across as many CPUs as are available in order to achieve great
>> >> speed-up compared to the usual algorithm of performing each
>> >> function-application in sequence down the list? Would anybody
>> >> volunteer such a system for me to occasionally use across the net
>> >> without charge, for research purposes?
>>
>> >> Of course if there are dependencies from one function application
>> >> to the next, this parallel-mapcar wouldn't be appropriate. But I
>> >> have an application where I need to apply a single function to a
>> >> large number of arguments in parallel. I'm running it with nearly
>> >> three hundred at the moment, whereupon it takes several minutes to
>> >> do them all in succession, which isn't too bad if done rarely, but
>> >> I envision doing the same with thousands of arguments, whereby the
>> >> time to do them in succession would be prohibitive.
>>
>> > Excuse my ignorant remark, but can't you just spawn 64 threads and let
>> > the OS scheduler balance them?
>>
>> Not on most schedulers today.
>> The reason is the garbage collector.
>> LispWorks and ACL lock you to one processor for all processes.
>> Spinlocks have different properties from semaphores.
>
>If I understand correctly, the Sun Java VM has a multithreading/
>multiprocessor garbage collector, doesn't it?
>Would it be more difficult to implement one for Common Lisp?

It could be a bit easier depending on whether you insist on finalizers
(a dumb idea anyway IMO).  Since Lisp code generally employs scoped
handling for resources other than memory, finalizers aren't really
necessary.
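
The scoped idiom in question, for concreteness:

  (with-open-file (stream "data.txt" :direction :input)
    (read-line stream))
  ;; expands to roughly OPEN + UNWIND-PROTECT + CLOSE, so the file
  ;; is closed on any exit and no finalizer is needed.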

George
--
for email reply remove "/" from address
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <rem-2008jun09-005@yahoo.com>
> From: Vend <······@virgilio.it>
> Excuse my ignorant remark, but can't you just spawn 64 threads
> and let the OS scheduler balance them?

I suppose I could do that *once*, and immediately have my account terminated.

Once a few years ago I had a function that created a new process,
inside an UNWIND-PROTECT to make sure it got killed if the inner
code aborted. But I made a mistake in editing the function and the
process no longer got killed when an abort happened. I was trying
to debug why it was failing, unaware that I was creating a new
process and not killing it with each re-testing. Suddenly my login
session got killed by the admin and when I dialed back in I found a
nasty message from the admin complaining about my appx. 30
processes that were hogging system resources. I would never want to
do anything like that *deliberately*, as you propose.
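
(The shape the code was supposed to have -- a reconstruction, with
WORKER-FUNCTION and INNER-CODE as placeholders for my real code:)

(let ((proc (mp:make-process #'worker-function)))
  (unwind-protect
      (inner-code)
    (mp:destroy-process proc)))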

The whole point of distributing my tasks over 64 processors is to
try to get it to run 64 times as fast as if running on a single
processor, to allow a couple orders of magnitude more data to be
processed within a reasonable time. If I spawned 64 processes on a
single CPU here on this shell account, it wouldn't run any faster,
would probably run slower, and would piss off the admin royally if
he learned I did it deliberately after already being warned not to
do anything like that after it happened accidentally years ago.

By the way, I notice you're in Italy, right? I get a lot of spam
from Italy, but it's mostly from that other ISP, not yours.

-
Nobody in their right mind likes spammers, nor their automated assistants.
To open an account here, you must demonstrate you're not one of them.
Please spend a few seconds to try to read the text-picture in this box:

/--------------------------------------------------------------------\
|  |\/| _ | _    ~|~'   _   |_  _  _|   _ _  _ |   |.   _    _ _     |
|  |  |(_||(/_.  _|_ \/(/_  | |(_|(_|  | (/_(_||,  ||\/(/_  _\(/_><. |
\----(Rendered by means of <http://www.schnoggo.com/figlet.html>)----/
  (You don't need JavaScript or images to see that ASCII-text image!!
   You just need to view this in a fixed-pitch font such as Monaco.)

Then enter your best guess of the text (20-40 chars) into this TextField:
          +----------------------------------------+
          |                                        |
          +----------------------------------------+
From: Vend
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <04b8869a-c659-41a5-8d62-679555b6ed87@i76g2000hsf.googlegroups.com>
On 9 Giu, 23:16, ··················@spamgourmet.com.remove (Robert
Maas, http://tinyurl.com/uh3t) wrote:
> > From: Vend <······@virgilio.it>
> > Excuse my ignorant remark, but can't you just spawn 64 threads
> > and let the OS scheduler balance them?
>
> I suppose I could do that *once*, and immediately have my account terminated.
>
> Once a few years ago I had a function that created a new process,
> inside an UNWIND-PROTECT to make sure it got killed if the inner
> code aborted. But I made a mistake in editing the function and the
> process no longer got killed when an abort happened. I was trying
> to debug why it was failing, unaware that I was creating a new
> process and not killing it with each re-testing. Suddenly my login
> session got killed by the admin and when I dialed back in I found a
> nasty message from the admin complaining about my appx. 30
> processes that were hogging system resources. I would never want to
> do anything like that *deliberately*, as you propose.
>
> The whole point of distributing my tasks over 64 processors is to
> try to get it to run 64 times as fast as if running on a single
> processor, to allow a couple orders of magnitude more data to be
> processed within a reasonable time. If I spawned 64 processes on a
> single CPU here on this shell account, it wouldn't run any faster,
> would probably run slower, and would piss off the admin royally if
> he learned I did it deliberately after already being warned not to
> do anything like that after it happened accidentally years ago.

I assumed you were working on a SMP machine with all the CPUs managed
by a single OS.

> By the way, I notice you're in Italy, right? I get a lot of spam
> from Italy, but it's mostly from that other ISP, not yours.

So?

From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <rem-2008jun30-006@yahoo.com>
> Date: Tue, 10 Jun 2008 04:00:01 -0700 (PDT)
Why this response is so belated:
  <http://groups.google.com/group/misc.misc/msg/cea714440e591dd2>
= <······················@yahoo.com>
> From: Vend <······@virgilio.it>
> I assumed you were working on a SMP machine with all the CPUs managed
> by a single OS.

I have no idea how to find out that information.
From: Tim X
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <87abhtr9xf.fsf@lion.rapttech.com.au>
··················@spamgourmet.com.remove (Robert Maas,
http://tinyurl.com/uh3t) writes:

>
> By the way, I notice you're in Italy, right? I get a lot of spam
> from Italy, but it's mostly from that other ISP, not yours.
>

If you put your address out there, you will get a lot of spam. Use an
ISP with a decent anti-spam solution and your life will be easier.

> -
> Nobody in their right mind likes spammers, nor their automated assistants.
> To open an account here, you must demonstrate you're not one of them.
> Please spend a few seconds to try to read the text-picture in this box:
>
> /--------------------------------------------------------------------\
> |  |\/| _ | _    ~|~'   _   |_  _  _|   _ _  _ |   |.   _    _ _     |
> |  |  |(_||(/_.  _|_ \/(/_  | |(_|(_|  | (/_(_||,  ||\/(/_  _\(/_><. |
> \----(Rendered by means of <http://www.schnoggo.com/figlet.html>)----/
>   (You don't need JavaScript or images to see that ASCII-text image!!
>    You just need to view this in a fixed-pitch font such as Monaco.)
>
> Then enter your best guess of the text (20-40 chars) into this TextField:

yeah, and what about blind users? 

-- 
tcross (at) rapttech dot com dot au
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <P_-dnV26qKvSL8bVnZ2dnUVZ8hydnZ2d@posted.plusnet>
Robert Maas, http://tinyurl.com/uh3t wrote:
> The whole point of distributing my tasks over 64 processors is to
> try to get it to run 64 times as fast as if running on a single
> processor, to allow a couple orders of magnitude more data to be
> processed within a reasonable time.

If you want your programs to run at a reasonable speed you should not be
using Lisp in the first place. So your best bet is probably to rewrite your
code in a modern language and benefit from their enormous performance
improvements.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Ariel
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <20080620081855.458c0980.no@mail.poo>
On Fri, 20 Jun 2008 14:32:50 +0100
Jon Harrop <···@ffconsultancy.com> wrote:

> Robert Maas, http://tinyurl.com/uh3t wrote:
> > The whole point of distributing my tasks over 64 processors is to
> > try to get it to run 64 times as fast as if running on a single
> > processor, to allow a couple orders of magnitude more data to be
> > processed within a reasonable time.
> 
> If you want your programs to run at a reasonable speed you should not be
> using Lisp in the first place. So your best bet is probably to rewrite your
> code in a modern language and benefit from their enormous performance
> improvements.

I thought using a modern compiler for Lisp would allow it to perform as
well as any other standard modern day high level language?  (Or so says
Paul Graham.)  Is this a false statement?
From: Edi Weitz
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <uskv8hzq0.fsf@agharta.de>
On Fri, 20 Jun 2008 08:18:55 -0700, Ariel <··@mail.poo> wrote:

> I thought using a modern compiler for Lisp would allow it to perform
> as well as any other standard modern day high level language?  (Or
> so says Paul Graham.)  Is this a false statement?

The statement is correct.  The person who posted the mis-information
you replied to is a well-known troll who tries everything (including
blatant lies) to sell his books and magazines.  Search the archives of
this newsgroup for other postings of him.

Edi.

-- 

Lisp is not dead, it just smells funny.

Real email: (replace (subseq ·········@agharta.de" 5) "edi")
From: Rainer Joswig
Subject: Lucius Detritus
Date: 
Message-ID: <joswig-593B8B.18425220062008@news-europe.giganews.com>
In article <·············@agharta.de>, Edi Weitz <········@agharta.de> 
wrote:

> On Fri, 20 Jun 2008 08:18:55 -0700, Ariel <··@mail.poo> wrote:
> 
> > I thought using a modern compiler for Lisp would allow it to perform
> > as well as any other standard modern day high level language?  (Or
> > so says Paul Graham.)  Is this a false statement?
> 
> The statement is correct.  The person who posted the mis-information
> you replied to is a well-known troll who tries everything (including
> blatant lies) to sell his books and magazines.  Search the archives of
> this newsgroup for other postings of him.
> 
> Edi.

He (They?) reminds me of Lucius Detritus, if you know who
I mean (Germans might know him as Tullius Destruktivus). ;-)
Remember the green speech bubbles?

-- 
http://lispm.dyndns.org/
From: Ariel
Subject: Re: Lucius Detritus
Date: 
Message-ID: <20080620103242.c5d695b6.no@mail.poo>
On Fri, 20 Jun 2008 18:42:53 +0200
Rainer Joswig <······@lisp.de> wrote:

> In article <·············@agharta.de>, Edi Weitz <········@agharta.de> 
> wrote:
> 
> > On Fri, 20 Jun 2008 08:18:55 -0700, Ariel <··@mail.poo> wrote:
> > 
> > > I thought using a modern compiler for Lisp would allow it to perform
> > > as well as any other standard modern day high level language?  (Or
> > > so says Paul Graham.)  Is this a false statement?
> > 
> > The statement is correct.  The person who posted the mis-information
> > you replied to is a well-known troll who tries everything (including
> > blatant lies) to sell his books and magazines.  Search the archives of
> > this newsgroup for other postings of him.
> > 
> > Edi.
> 
> He (They?) reminds me of Lucius Detritus, if you know who
> I mean (Germans might know him as Tullius Destruktivus). ;-)
> Remember the green speech bubbles?
> 
> -- 
> http://lispm.dyndns.org/

I used to love reading Asterix :)
From: Rainer Joswig
Subject: Re: Lucius Detritus
Date: 
Message-ID: <joswig-21514E.19434620062008@news-europe.giganews.com>
In article <··························@mail.poo>, Ariel <··@mail.poo> 
wrote:

> On Fri, 20 Jun 2008 18:42:53 +0200
> Rainer Joswig <······@lisp.de> wrote:
> 
> > In article <·············@agharta.de>, Edi Weitz <········@agharta.de> 
> > wrote:
> > 
> > > On Fri, 20 Jun 2008 08:18:55 -0700, Ariel <··@mail.poo> wrote:
> > > 
> > > > I thought using a modern compiler for Lisp would allow it to perform
> > > > as well as any other standard modern day high level language?  (Or
> > > > so says Paul Graham.)  Is this a false statement?
> > > 
> > > The statement is correct.  The person who posted the mis-information
> > > you replied to is a well-known troll who tries everything (including
> > > blatant lies) to sell his books and magazines.  Search the archives of
> > > this newsgroup for other postings of him.
> > > 
> > > Edi.
> > 
> > He (They?) reminds me of Lucius Detritus, if you know who
> > I mean (Germans might know him as Tullius Destruktivus). ;-)
> > Remember the green speech bubbles?
> > 
> > -- 
> > http://lispm.dyndns.org/
> 
> I used to love reading Asterix :)

I'd say there are some parallels between Asterix and co. and
the Lisp users. ;-)

-- 
http://lispm.dyndns.org/
From: Edi Weitz
Subject: Re: Lucius Detritus
Date: 
Message-ID: <uej6shsjz.fsf@agharta.de>
On Fri, 20 Jun 2008 18:42:53 +0200, Rainer Joswig <······@lisp.de> wrote:

> He (They?) reminds me of Lucius Detritus, if you know who I mean
> (Germans might know him as Tullius Destruktivus). ;-) Remember the
> green speech bubbles?

Hehe.  I sometimes had to think of Lucius Detritus when I read
gavino's postings.  He wrote one sentence and then disappeared, but it
was usually enough to keep half of c.l.l busy for a week... :)

Edi.

-- 

Lisp is not dead, it just smells funny.

Real email: (replace (subseq ·········@agharta.de" 5) "edi")
From: Raymond Wiker
Subject: Re: Lucius Detritus
Date: 
Message-ID: <m27iciog2j.fsf@RAWMBP.local>
Rainer Joswig <······@lisp.de> writes:

> In article <·············@agharta.de>, Edi Weitz <········@agharta.de> 
> wrote:
>
>> On Fri, 20 Jun 2008 08:18:55 -0700, Ariel <··@mail.poo> wrote:
>> 
>> > I thought using a modern compiler for Lisp would allow it to perform
>> > as well as any other standard modern day high level language?  (Or
>> > so says Paul Graham.)  Is this a false statement?
>> 
>> The statement is correct.  The person who posted the mis-information
>> you replied to is a well-known troll who tries everything (including
>> blatant lies) to sell his books and magazines.  Search the archives of
>> this newsgroup for other postings of him.
>> 
>> Edi.
>
> He (They?) reminds me of Lucius Detritus, if you know who
> I mean (Germans might know him as Tullius Destruktivus). ;-)
> Remember the green speech bubbles?

	Terry Pratchett also has a character named "Detritus"; his
version is a troll who uses a helmet with a clockwork fan to boost his
mental capacity. Could be that Harrop could use one of those, too.
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <uO-dnTK9OMfCW8HVnZ2dnUVZ8tPinZ2d@posted.plusnet>
Edi Weitz wrote:
> On Fri, 20 Jun 2008 08:18:55 -0700, Ariel <··@mail.poo> wrote:
>> I thought using a modern compiler for Lisp would allow it to perform
>> as well as any other standard modern day high level language?  (Or
>> so says Paul Graham.)  Is this a false statement?
> 
> The statement is correct.  The person who posted the mis-information
> you replied to is a well-known troll who tries everything (including
> blatant lies) to sell his books and magazines. Search the archives of this
> newsgroup for other postings of him. 

Note the breadth of programming language experience offered by Edi Weitz:

  http://www.weitz.de

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Raymond Wiker
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <m23an6ofyq.fsf@RAWMBP.local>
Jon Harrop <···@ffconsultancy.com> writes:

> Edi Weitz wrote:
>> On Fri, 20 Jun 2008 08:18:55 -0700, Ariel <··@mail.poo> wrote:
>>> I thought using a modern compiler for Lisp would allow it to perform
>>> as well as any other standard modern day high level language?  (Or
>>> so says Paul Graham.)  Is this a false statement?
>> 
>> The statement is correct.  The person who posted the mis-information
>> you replied to is a well-known troll who tries everything (including
>> blatant lies) to sell his books and magazines. Search the archives of this
>> newsgroup for other postings of him. 
>
> Note the breadth of programming language experience offered by Edi Weitz:
>
>   http://www.weitz.de

	His web pages show that he is a more than capable Lisp
programmer, which means that he is worth listening to in a Lisp
newsgroup. You, on the other hand, have shown yourself worth
ignoring, in Lisp newsgroups as well as a number of other places. 
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <8rydnV60L_Cgs8PVnZ2dneKdnZydnZ2d@posted.plusnet>
Raymond Wiker wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> Note the breadth of programming language experience offered by Edi Weitz:
>>
>>   http://www.weitz.de
> 
> His web pages show that he is a more than capable Lisp
> programmer, which means that he is worth listening to in a Lisp
> newsgroup.

Edi was commenting on comparisons of Lisp with other languages that he
appears to have no experience of whatsoever.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Pascal Costanza
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <6c7d39F3ema54U2@mid.individual.net>
Jon Harrop wrote:
> Raymond Wiker wrote:
>> Jon Harrop <···@ffconsultancy.com> writes:
>>> Note the breadth of programming language experience offered by Edi Weitz:
>>>
>>>   http://www.weitz.de
>> His web pages shows that he is a more than capable Lisp
>> programmer, which means that he is worth listening to in a Lisp
>> newsgroup.
> 
> Edi was commenting on comparisons of Lisp with other languages that he
> appears to have no experience of whatsoever.

Edi doesn't need experience in several programming languages to point 
out that you're wrong about Lisp.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <aZedncmi9fcDD8PVnZ2dneKdnZydnZ2d@posted.plusnet>
Pascal Costanza wrote:
> Jon Harrop wrote:
>> Edi was commenting on comparisons of Lisp with other languages that he
>> appears to have no experience of whatsoever.
> 
> Edi doesn't need experience in several programming languages to point
> out that you're wrong about Lisp.

Then why are all of the Lisp implementations slower and longer on all of
these tests?

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: George Neuner
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <2h6t5456l3bq5ml15vki31193lu1ef3v64@4ax.com>
On Sun, 22 Jun 2008 18:54:40 +0100, Jon Harrop <···@ffconsultancy.com>
wrote:

>Pascal Costanza wrote:
>> Jon Harrop wrote:
>>> Edi was commenting on comparisons of Lisp with other languages that he
>>> appears to have no experience of whatsoever.
>> 
>> Edi doesn't need experience in several programming languages to point
>> out that you're wrong about Lisp.
>
>Then why are all of the Lisp implementations slower and longer on all of
>these tests?

The source is longer because ((O)Ca)ML/F# syntax has some innate
brevity advantages wrt Lisp and because Lisp does still require some
type declarations to get maximum speed.

IMO source brevity is not an overwhelming advantage.  I know about the
studies that show programmers write about the same number of lines
regardless of language and so denser coding languages get more bang
for buck.  But that argument ignores long term maintenance issues -
code can get too dense to read and understand easily.  If everyone
believed brevity was a virtue, we'd all be using APL.

As for performance, I haven't looked at all the programs you've
"tested", but it's a good bet they were ports of your Ocaml solutions
rather than being written specifically for Lisp.

George
--
for email reply remove "/" from address
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <_s2dnSOra-uyKMPVnZ2dnUVZ8qDinZ2d@posted.plusnet>
George Neuner wrote:
> On Sun, 22 Jun 2008 18:54:40 +0100, Jon Harrop <···@ffconsultancy.com>
> wrote:
>>Then why are all of the Lisp implementations slower and longer on all of
>>these tests?
> 
> The source is longer because ((O)Ca)ML/F# syntax has some innate
> brevity advantages wrt Lisp and because Lisp does still require some
> type declarations to get maximum speed.
> 
> IMO source brevity is not an overwhelming advantage.  I know about the
> studies that show programmers write about the same number of lines
> regardless of language and so denser coding languages get more bang
> for buck.  But that argument ignores long term maintenance issues -
> code can get too dense to read and understand easily.

I am not actually sure if that will ever be true but I certainly don't
believe that it is true in this case. In fact, if you look at the OCaml:

let rec ( +: ) f g = match f, g with 
   | `Int n, `Int m -> `Int (n +/ m) 
   | `Int (Int 0), e | e, `Int (Int 0) -> e 
   | f, `Add(g, h) -> f +: g +: h 
   | f, g -> `Add(f, g) 
let rec ( *: ) f g = match f, g with 
   | `Int n, `Int m -> `Int (n */ m) 
   | `Int (Int 0), e | e, `Int (Int 0) -> `Int (Int 0) 
   | `Int (Int 1), e | e, `Int (Int 1) -> e 
   | f, `Mul(g, h) -> f *: g *: h 
   | f, g -> `Mul(f, g) 
let rec simplify = function 
   | `Int _ | `Var _ as f -> f 
   | `Add (f, g) -> simplify f +: simplify g 
   | `Mul (f, g) -> simplify f *: simplify g

it still leaves a lot to be desired. Referring to an arbitrary-precision
zero as "Int 0" is needless verbosity (F# uses 0N). Having to invent "+:"
and "*:" because OCaml lacks operator overloading is unnecessary. Even
having to implement a custom rewriter for an algebraic datatype might be
considered needless.

Consider a Mathematica-like syntax:

  {n_Integer + m_Integer -> n + m,
   (0 + e_ | e_ + 0) -> e,
   e_ + (f_ + g_) -> e + f + g,
   n_Integer m_Integer -> n m,
   (0 e_ | e_ 0) -> 0,
   (1 e_ | e_ 1) -> e,
   e_ (f_ g_) -> e f g}

> If everyone believed brevity was a virtue, we'd all be using APL.

If everyone believed brevity was the *only* virtue, yes. I am actually
interested in APL and often use Mathematica's very concise domain-specific
notations. So I'm not afraid of extreme brevity.

> As for performance, I haven't looked at all the programs you've
> "tested", but it's a good bet they were ports of your Ocaml solutions
> rather than being written specifically for Lisp.

The original source for the symbolic simplifier was actually in Mathematica
and for the ray tracer it was C++. There was also a compiler for a
Lisp-like DSL that was written in Lisp and even that became substantially
shorter and just as fast when I ported it from Lisp to OCaml. So I don't
think there is an unfair bias afflicting these results.

There are some syntactic differences (e.g. currying) and some expressiveness
(e.g. pattern matching). Nothing insurmountable but the Lisp community
really need to work to bring Lisp up to date, IMHO.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Pascal Costanza
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <6c7lamF3fb098U1@mid.individual.net>
Jon Harrop wrote:
> Pascal Costanza wrote:
>> Jon Harrop wrote:
>>> Edi was commenting on comparisons of Lisp with other languages that he
>>> appears to have no experience of whatsoever.
>> Edi doesn't need experience in several programming languages to point
>> out that you're wrong about Lisp.
> 
> Then why are all of the Lisp implementations slower and longer on all of
> these tests?

LOL

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <kMGdnSwTlOs_jMHVnZ2dnUVZ8vGdnZ2d@posted.plusnet>
Ariel wrote:
> On Fri, 20 Jun 2008 14:32:50 +0100
> Jon Harrop <···@ffconsultancy.com> wrote:
>> Robert Maas, http://tinyurl.com/uh3t wrote:
>> > The whole point of distributing my tasks over 64 processors is to
>> > try to get it to run 64 times as fast as if running on a single
>> > processor, to allow a couple orders of magnitude more data to be
>> > processed within a reasonable time.
>> 
>> If you want your programs to run at a reasonable speed you should not be
>> using Lisp in the first place. So your best bet is probably to rewrite
>> your code in a modern language and benefit from their enormous
>> performance improvements.
> 
> I thought using a modern compiler for Lisp would allow it to perform as
> well as any other standard modern day high level language?  (Or so says
> Paul Graham.)  Is this a false statement?

That certainly is a false statement, yes. Even for trivial tasks, it can be
vastly more difficult to write comparably efficient Lisp code. For example,
this toy symbolic simplifier:

  http://www.lambdassociates.org/studies/study10.htm

The OCaml code is not particularly efficient and OCaml itself is not
particularly well-suited to this benchmark (MLton-compiled SML would be a
lot faster):

let rec ( +: ) f g = match f, g with 
   | `Int n, `Int m -> `Int (n +/ m) 
   | `Int (Int 0), e | e, `Int (Int 0) -> e 
   | f, `Add(g, h) -> f +: g +: h 
   | f, g -> `Add(f, g) 
let rec ( *: ) f g = match f, g with 
   | `Int n, `Int m -> `Int (n */ m) 
   | `Int (Int 0), e | e, `Int (Int 0) -> `Int (Int 0) 
   | `Int (Int 1), e | e, `Int (Int 1) -> e 
   | f, `Mul(g, h) -> f *: g *: h 
   | f, g -> `Mul(f, g) 
let rec simplify = function 
   | `Int _ | `Var _ as f -> f 
   | `Add (f, g) -> simplify f +: simplify g 
   | `Mul (f, g) -> simplify f *: simplify g

The obvious Lisp is 7.5x slower than the unoptimized OCaml:
 
(defun simplify (a) 
    (if (atom a) 
        a 
        (destructuring-bind (op x y) a 
         (let* ((f (simplify x)) 
                (g (simplify y)) 
                (nf (numberp f)) 
                (ng (numberp g)) 
                (+? (eq '+ op)) 
                (*? (eq '* op))) 
           (cond 
             ((and +? nf ng)                   (+ f g)) 
             ((and +? nf (zerop f))            g) 
             ((and +? ng (zerop g))            f) 
             ((and (listp g) (eq op (first g))) 
              (destructuring-bind (op2 u v) g 
                (simplify `(,op (,op ,f ,u) ,v)))) 
             ((and *? nf ng)                   (* f g)) 
             ((and *? (or (and nf (zerop f)) 
                          (and ng (zerop g)))) 0) 
             ((and *? nf (= 1 f))              g) 
             ((and *? ng (= 1 g))              f) 
             (t                                `(,op ,f ,g)))))))

If you openly advertise this as a programming challenge and get dozens of
expert Lisp programmers to painstakingly optimize solutions to this trivial
problem over a period of many weeks then the fastest implementation you get
is still 1.7x slower than the unoptimized OCaml but, more importantly, it
is a completely unmaintainable mess:
 
(defun simplify-no-redundant-checks (xexpr)
  (if (atom xexpr)
      xexpr
      (let ((op (first xexpr))
            (z (second xexpr))
            (y (third xexpr)))
        (let* ((f (simplify-no-redundant-checks z))
               (g (simplify-no-redundant-checks y))
               (nf (numberp f))
               (ng (numberp g)))
          (tagbody
           START
             (if (eq '+ op) (go OPTIMIZE-PLUS) (go TEST-MULTIPLY))
           OPTIMIZE-PLUS
             (when (and nf ng)
               (return-from simplify-no-redundant-checks (+ f g)))
           TEST-PLUS-ZEROS
             (when (eql f 0) (return-from simplify-no-redundant-checks g))
             (when (eql g 0) (return-from simplify-no-redundant-checks f))
             (go REARRANGE-EXPR)
           TEST-MULTIPLY
             (unless (eq '* op) (go REARRANGE-EXPR))
           OPTIMIZE-MULTIPLY
             (when (and nf ng)
               (return-from simplify-no-redundant-checks (* f g)))
           TEST-MULTIPLY-ZEROS-AND-ONES
             (when (or (eql f 0) (eql g 0))
               (return-from simplify-no-redundant-checks 0))
             (when (eql f 1) (return-from simplify-no-redundant-checks g))
             (when (eql g 1) (return-from simplify-no-redundant-checks f))
           REARRANGE-EXPR
             (when (and (listp g) (eq op (first g)))
               (let ((op2 (first g))
                     (u (second g))
                     (v (third g)))
                 (declare (ignore op2))
                 (return-from simplify-no-redundant-checks
                   (simplify-no-redundant-checks (list op (list op f u) v)))))
           MAYBE-CONS-EXPR
             (if (and (eq f z) (eq g y))
                 (return-from simplify-no-redundant-checks xexpr)
                 (return-from simplify-no-redundant-checks
                   (list op f g))))))))

Lisp's awful performance only gets worse as your programs get more
complicated. For anything non-trivial, Lisp is inevitably incredibly slow.
Eventually, all Lisp programmers end up Greenspunning features like
optimizing pattern matchers over algebraic data types that are taken for
granted in modern functional languages. As a dynamic language, Lisp cannot
statically check even the most basic of constraints so programmers are
forced to code in the debugger and waste their lives writing unit testing
code.

Fortunately, there are a wealth of much better modern functional programming
languages out there for you to use, like SML, OCaml, Haskell, F# and Scala.
They are not only many times faster than Lisp for real work but also much
more expressive and concise with better tools and much larger communities
of friendly users. If you are interested in earning a living then I
strongly recommend taking a look at Scala and F# because they already have
vastly wealthier markets than Lisp will ever have.

Regards,
-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Rainer Joswig
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <joswig-517337.22304520062008@news-europe.giganews.com>
In article <································@posted.plusnet>,
 Jon Harrop <···@ffconsultancy.com> wrote:

> Ariel wrote:
> > On Fri, 20 Jun 2008 14:32:50 +0100
> > Jon Harrop <···@ffconsultancy.com> wrote:
> >> Robert Maas, http://tinyurl.com/uh3t wrote:
> >> > The whole point of distributing my tasks over 64 processors is to
> >> > try to get it to run 64 times as fast as if running on a single
> >> > processor, to allow a couple orders of magnitude more data to be
> >> > processed within a reasonable time.
> >> 
> >> If you want your programs to run at a reasonable speed you should not be
> >> using Lisp in the first place. So your best bet is probably to rewrite
> >> your code in a modern language and benefit from their enormous
> >> performance improvements.
> > 
> > I thought using a modern compiler for Lisp would allow it to perform as
> > well as any other standard modern day high level language?  (Or so says
> > Paul Graham.)  Is this a false statement?
> 
> That certainly is a false statement, yes. Even for trivial tasks, it can be
> vastly more difficult to write comparably efficient Lisp code. For example,
> this toy symbolic simplifier:

                            ___________________________
                   /|  /|  |                          |
                   ||__||  |       Please don't       |
                  /   O O\__           feed           |
                 /          \       the troll         |
                /      \     \                        |
               /   _    \     \ ---------------------- 
              /    |\____\     \     ||                
             /     | | | |\____/     ||                
            /       \|_|_|/   |    __||                
           /  /  \            |____| ||                
          /   |   | /|        |      --|               
          |   |   |//         |____  --|               
   * _    |  |_|_|_|          |     \-/                
*-- _--\ _ \     //           |                        
  /  _     \\ _ //   |        /                        
*  /   \_ /- | -     |       |                         
  *      ___ c_c_c_C/ \C_c_c_c____________
From: John Thingstad
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <op.uc2exwe0ut4oq5@pandora.alfanett.no>
På Fri, 20 Jun 2008 22:19:33 +0200, skrev Jon Harrop
<···@ffconsultancy.com>:

Spoken like someone who has never actually programmed Lisp.

Just earlier this week I made a program to compute prime numbers.
It found all primes < 1000000 in 1.015 seconds.
That is just as fast as the same algorithm in C...
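
Something along these lines, say (a minimal sketch assuming a plain
sieve of Eratosthenes -- not the exact program I timed):

(defun primes-below (n)
  "Sieve of Eratosthenes over a bit vector; returns all primes < N."
  (let ((composite (make-array n :element-type 'bit :initial-element 0)))
    (loop for i from 2 below n
          when (zerop (bit composite i))
            do (loop for j from (* i i) below n by i
                     do (setf (bit composite j) 1)))
    (loop for i from 2 below n
          when (zerop (bit composite i)) collect i)))

;; (time (length (primes-below 1000000))) => a list of 78498 primes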

--------------
John Thingstad
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <uO-dnTC9OMcYXsHVnZ2dnUVZ8tPinZ2d@posted.plusnet>
John Thingstad wrote:
> På Fri, 20 Jun 2008 22:19:33 +0200, skrev Jon Harrop
> <···@ffconsultancy.com>:
> 
> Spoken like someone who has never actually programmed Lisp.

Note that I didn't even write the Lisp.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <uO-dnTO9OMe1WcHVnZ2dnUVZ8tPinZ2d@posted.plusnet>
John Thingstad wrote:
> På Fri, 20 Jun 2008 22:19:33 +0200, skrev Jon Harrop
> <···@ffconsultancy.com>:
> 
> Spoken like someone who has never actually programmed Lisp.
> 
> Just earlier this week I made a program to compute prime numbers.
> It found all primes < 1000000 in 1.015 seconds.
> That is just as fast as the same algorithm in C...

Yes, of course. That task is so simple that Lisp's deficiencies are not
relevant.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Ariel
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <20080621052922.0c20ad2d.no@mail.poo>
On Sat, 21 Jun 2008 10:01:20 +0100
Jon Harrop <···@ffconsultancy.com> wrote:

> John Thingstad wrote:
> > På Fri, 20 Jun 2008 22:19:33 +0200, skrev Jon Harrop
> > <···@ffconsultancy.com>:
> > 
> > Spoken like someone who has never actually programmed Lisp.
> > 
> > Just earlier this week I made a program to compute prime numbers.
> > It found all primes < 1000000 in 1.015 seconds.
> > That is just as fast as the same algorithm in C...
> 
> Yes, of course. That task is so simple that Lisp's deficiencies are not
> relevant.

Saying that a task is too simple doesn't give your argument any weight; some languages process sleep cycles faster than others, so what?  If you had instead said something like "it avoids these specific slower functions of Lisp", thereby actually showing where the slowness occurs in the language, it would have been a helpful comment.

For example, Perl isn't the fastest language to start with, but it becomes drastically slower when processing regexes.  Thus if speed is your priority over ease of programming, you should avoid regexes as much as possible so that Perl gains back toward its maximum speed potential.
-a
From: George Neuner
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <qvaq549dehin8arvu50lfdoflsagjefn4c@4ax.com>
On Sat, 21 Jun 2008 05:29:22 -0700, Ariel <··@mail.poo> wrote:

>On Sat, 21 Jun 2008 10:01:20 +0100
>Jon Harrop <···@ffconsultancy.com> wrote:
>
>> John Thingstad wrote:
>> > På Fri, 20 Jun 2008 22:19:33 +0200, skrev Jon Harrop
>> > <···@ffconsultancy.com>:
>> > 
>> > Spoken like someone who has never actually programmed Lisp.
>> > 
>> > Just earlier this week I made a program to compute prime numbers.
>> > It found all primes < 1000000 in 1.015 seconds.
>> > That is just as fast as the same algorithm in C...
>> 
>> Yes, of course. That task is so simple that Lisp's deficiencies are not
>> relevant.
>
>Saying that a task is too simple doesn't give your argument any weight;
>some languages process sleep cycles faster than others, so what?
>If you had instead said something like "it avoids these specific slower
>functions of Lisp", thereby actually showing where the slowness occurs
>in the language, it would have been a helpful comment.
>
>For example, Perl isn't the fastest language to start with, but it
>becomes drastically slower when processing regexes.  Thus if speed is
>your priority over ease of programming, you should avoid regexes as
>much as possible so that Perl gains back toward its maximum speed
>potential.

Jon Harrop has a bug up his ass about brevity.  His main complaint
about Lisp is not its lack of performance, but the fact that high
performance is not the default setting and his simple, naively written
code doesn't run very fast.

He has very little Lisp experience and his comments regarding Lisp
should simply be ignored.

George
--
for email reply remove "/" from address
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <8rydnVm0L_Djs8PVnZ2dneKdnZydnZ2d@posted.plusnet>
George Neuner wrote:
> He has very little Lisp experience and his comments regarding Lisp
> should simply be ignored.

Even if my comments are about objective measurements of other people's Lisp
code?

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Pascal Costanza
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <6c7d12F3ema54U1@mid.individual.net>
Jon Harrop wrote:
> George Neuner wrote:
>> He has very little Lisp experience and his comments regarding Lisp
>> should simply be ignored.
> 
> Even if my comments are about objective measurements of other people's Lisp
> code?

...but they aren't objective, which has been shown a thousand times in 
this newsgroup.

Buzz off.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <4LidnSZERoLY4sPVnZ2dnUVZ8s3inZ2d@posted.plusnet>
Pascal Costanza wrote:
> Jon Harrop wrote:
>> George Neuner wrote:
>>> He has very little Lisp experience and his comments regarding Lisp
>>> should simply be ignored.
>> 
>> Even if my comments are about objective measurements of other people's
>> Lisp code?
> 
> ...but they aren't objective, which has been shown a thousand times in
> this newsgroup.

Measurements of program length and speed are objective. Your objections on
religious grounds in this newsgroup are not objective.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Pascal Costanza
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <6c7ehhF3euk7fU1@mid.individual.net>
Jon Harrop wrote:
> Pascal Costanza wrote:
>> Jon Harrop wrote:
>>> George Neuner wrote:
>>>> He has very little Lisp experience and his comments regarding Lisp
>>>> should simply be ignored.
>>> Even if my comments are about objective measurements of other people's
>>> Lisp code?
>> ...but they aren't objective, which has been shown a thousand times in
>> this newsgroup.
> 
> Measurements of program length and speed are objective. Your objections on
> religious grounds in this newsgroup are not objective.

LOL

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <irednZ_TiY9qt8DVnZ2dnUVZ8tPinZ2d@posted.plusnet>
Ariel wrote:
> On Sat, 21 Jun 2008 10:01:20 +0100
> Jon Harrop <···@ffconsultancy.com> wrote:
>> John Thingstad wrote:
>> > Just earlier this week I made a program to compute prime numbers.
>> > It found all primes < 1000000 in 1.015 seconds.
>> > That is just as fast as the same algorithm in C...
>> 
>> Yes, of course. That task is so simple that Lisp's deficiencies are not
>> relevant.
> 
> Saying that a task is too simple doesn't give your argument any weight;
> some languages process sleep cycles faster than others, so what?  If
> you had instead said something like "it avoids these specific slower
> functions of Lisp", thereby actually showing where the slowness occurs
> in the language, it would have been a helpful comment.

My original point was that Lisp makes high-level programming slow. For
example, high-level abstract constructs like pattern matching are heavily
optimized by the compilers of all modern functional languages but Lisp is
incapable of anything comparable (without drastically changing the language
and Greenspunning modern language features).
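
To make "Greenspunning" concrete, here is a deliberately naive sketch of
the kind of matcher Lispers end up hand-rolling (my own toy code, using
the old convention that symbols starting with ? are pattern variables).
It expands each pattern into nested tests with no decision-tree
optimization, which is precisely the optimization a real pattern-match
compiler performs for free:

(eval-when (:compile-toplevel :load-toplevel :execute)
  (defun pattern-variable-p (x)
    (and (symbolp x)
         (plusp (length (symbol-name x)))
         (char= #\? (char (symbol-name x) 0))))
  (defun expand-pattern (pattern var body)
    (cond ((pattern-variable-p pattern)   ; ?X binds the value
           `(let ((,pattern ,var)) ,body))
          ((null pattern)                 ; end of a list pattern
           `(if (null ,var) ,body nil))
          ((atom pattern)                 ; literal symbol or number
           `(if (eql ,var ',pattern) ,body nil))
          (t                              ; cons: match car, then cdr
           (let ((h (gensym)) (tl (gensym)))
             `(if (consp ,var)
                  (let ((,h (car ,var)) (,tl (cdr ,var)))
                    ,(expand-pattern (car pattern) h
                                     (expand-pattern (cdr pattern) tl body)))
                  nil))))))

(defmacro match (expr &body clauses)
  "Try each (PATTERN BODY...) clause in turn; NIL if none matches.
Quirk: a clause whose body evaluates to NIL falls through to the next."
  (let ((v (gensym "VAL")))
    `(let ((,v ,expr))
       (or ,@(loop for (pattern . body) in clauses
                   collect (expand-pattern pattern v `(progn ,@body)))))))

;; One rewrite step of the simplifier's + rules, for example:
;; (match '(+ 0 (* 1 x)) ((+ 0 ?e) ?e) (?e ?e))  =>  (* 1 X)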

John's response that Lisp can compete with C on a low-level program is true
for trivial problems but neither relevant to my point nor interesting in
the context of real programming (unless you are trying to solve problems so
simple that it is feasible for you to use C).

Fortunately, modern functional languages are so much more effective than
Lisp that you don't have to study much more complicated programs (e.g. the
Mersenne Twister rather than a prime sieve) to appreciate just how far in
advance these modern languages are.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Ariel
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <20080621110448.4b8785f8.no@mail.poo>
On Sat, 21 Jun 2008 17:20:01 +0100
Jon Harrop <···@ffconsultancy.com> wrote:

> Ariel wrote:
> > On Sat, 21 Jun 2008 10:01:20 +0100
> > Jon Harrop <···@ffconsultancy.com> wrote:
> >> John Thingstad wrote:
> >> > Just earlier this week I made a program to compute prime numbers.
> >> > It found all primes < 1000000 in 1.015 seconds.
> >> > That is just as fast as the same algorithm in C...
> >> 
> >> Yes, of course. That task is so simple that Lisp's deficiencies are not
> >> relevant.
> > 
> > Saying that a task is too simple doesn't give your argument any weight;
> > some languages process sleep cycles faster than others, so what?  If
> > you had instead said something like "it avoids these specific slower
> > functions of Lisp", thereby actually showing where the slowness occurs
> > in the language, it would have been a helpful comment.
> 
> My original point was that Lisp makes high-level programming slow. For
> example, high-level abstract constructs like pattern matching are heavily
> optimized by the compilers of all modern functional languages but Lisp is
> incapable of anything comparable (without drastically changing the language
> and Greenspunning modern language features).
> 
> John's response that Lisp can compete with C on a low-level program is true
> for trivial problems but neither relevant to my point nor interesting in
> the context of real programming (unless you are trying to solve problems so
> simple that it is feasible for you to use C).
> 
> Fortunately, modern functional languages are so much more effective than
> Lisp that you don't have to study much more complicated programs (e.g. the
> Mersenne Twister rather than a prime sieve) to appreciate just how far in
> advance these modern languages are.

This post just lost you all credibility in my eyes.  Good day, Jon Harrop.
-a
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <rem-2008jun21-001@yahoo.com>
> From: Jon Harrop <····@ffconsultancy.com>
> My original point was that Lisp makes high-level programming
> slow. For example, high-level abstract constructs like pattern
> matching are heavily optimized by the compilers of all modern
> functional languages but Lisp is incapable of anything comparable
> (without drastically changing the language and Greenspunning modern
> language features).

If you haven't done so yet, please write a Web page that defines
precisely what you mean by "pattern matching", including all
API-use-cases that you require.

Then post the URL for that Web page in this thread so that we can
all see what you're talking about.

For example, are you talking about the following API-use-case?
Given: A list of records (each a string), a list of BNF production rules,
        and the name (leftside) of the toplevel BNF production rule.
Task: Try to parse each record per that toplevel BNF production
       rule, using the other rules to analyze sub-strings in the
       usual (recursive) way.
Return: List of parse trees, with a null value (NIL in Lisp) for
         any records that didn't parse successfully. If the same
         record can be parsed in more than one way per the same
         BNF, return any one of the parse trees, I don't care
         which, any is good enough.
Suggestion: Automatically compile the BNF into some efficient
             algorithm, then map that algorithm down the list of
             records. (A naive baseline sketch appears below.)

Or are you talking about this other API-use-case?
Given: A list of computer-data representations of mathematical
        objects, a mathematical predicate expressed in first-order
        logic.
Task: Apply that predicate to each object in the list.
Return: List of formal claims as to whether each object satisfies
         the predicate or fails to satisfy the predicate. Each such
         claim is effectively a game strategy for always winning
         after choosing the correct side of the argument/game.
Suggestion: Ask John McCarthy for referrals to the best A.I.
             experts able to suggest ways to produce an expert/A.I.
             system for this kind of task.

Or are you talking about this other API-use-case?
Given: A list of spam (Unsolicited Commercial/Bulk E-mail) texts
        (including full header), and a numeric threshold for
        correlation between a record and a pattern.
Task: Find all significant patterns, such as Nigerian 419 spam
       which all claim somebody died in a plane crash or other
       disaster and asks for help getting large amounts of money
       out of the country.
Return: A relational database which identifies each pattern that
         was found (by a table matching pattern-IDnumber to an SQL
         description of the defining characteristics of the
         centroid of that pattern), and correlates the various
         patterns against the various records according to how well
         each record satisfies that pattern, where 0 means no match
         at all and 1 means exact match to the centroid of that
         pattern (by a 2-d sparse matrix between record-IDnumbers
         and pattern-IDnumbers, omitting any elements that are
         below the threshold, by whatever means you feel is best).
Suggestion: The sparse matrix could be a table with three columns:
             record-IDnumber pattern-IDnumber correlationCoefficient

What other API-use-cases do you require, which *other* languages
than Lisp can do, and do more efficiently than Lisp even if Lisp
can do it in the first place? (Also include API-use-cases which you
admit Lisp can do about equally as fast as other languages, if you
include such API-use-cases in your meaning of "pattern matching".)
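
To pin down use-case 1 above, here is a deliberately naive sketch of the
baseline semantics I'd accept (the rule syntax is my own ad-hoc encoding
of the BNF as Lisp data; it backtracks blindly and loops forever on
left-recursive rules, so it specifies the meaning, not the "efficient
algorithm"):

;; A grammar is an alist (nonterminal alternative...); an alternative
;; is a list of items; an item is a literal string or a nonterminal
;; symbol.  Each parser returns a list of (tree . end-position) pairs.
(defun parse-item (grammar item input pos)
  (if (stringp item)
      (let ((end (+ pos (length item))))
        (if (and (<= end (length input))
                 (string= item input :start2 pos :end2 end))
            (list (cons item end))
            '()))
      (parse-nonterminal grammar item input pos)))

(defun parse-sequence (grammar items input pos)
  (if (null items)
      (list (cons '() pos))
      (loop for (child . mid) in (parse-item grammar (first items) input pos)
            append (loop for (more . end)
                           in (parse-sequence grammar (rest items) input mid)
                         collect (cons (cons child more) end)))))

(defun parse-nonterminal (grammar sym input pos)
  (loop for alternative in (cdr (assoc sym grammar))
        append (loop for (children . end)
                       in (parse-sequence grammar alternative input pos)
                     collect (cons (cons sym children) end))))

(defun parse-record (grammar goal record)
  "One parse tree spanning all of RECORD, or NIL; any tree will do."
  (loop for (tree . end) in (parse-nonterminal grammar goal record 0)
        when (= end (length record)) return tree))

;; (parse-record '((s ("a" s "b") ("ab"))) 's "aabb")
;;   => (S "a" (S "ab") "b")
;; and then, per the Suggestion, map it down the list of records:
;; (mapcar (lambda (r) (parse-record grammar 's r)) records)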

> John's response that Lisp can compete with C on a low-level
> program is true for trivial problems but neither relevant to my
> point ...

Apparently you don't believe in TDD (Test-Driven Development).
Whatever the customer asks for specifically, you provide the
minimal response to that; then the customer complains that's not
sufficient and provides additional constraints, the programmer
produces a new program to satisfy those additional constraints,
etc., back and forth, until the customer has finally said what he
really wanted in the first place and the software now provides
that.
what kind of task it's too slow for. So the other poster gave an
example of a task that is just as fast in Lisp as in C. So now it's
your job to try to say exactly what you are complaining about, so
that we have a more precise target for our TDD. The ball is in your
court. Your vague comment about "pattern matching" is of no help in
driving us to work harder to prove Lisp is fast enough. To my mind,
producing a list of all primes within some interval is in essence
a generator of all integers in that interval feeding through a
filter that performs a pattern-matching algorithm, namely matching
the mathematical pattern of being composite (x = a*b where 1<a<=b),
and then taking the complement of that set, i.e. producing the
subset-sequence that *fails* the pattern. If you mean something
more complicated as "pattern matching", you need to say precisely
what you mean.

Put up or shut up.
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <8rydnVi0L_ATrcPVnZ2dneKdnZydnZ2d@posted.plusnet>
Robert Maas, http://tinyurl.com/uh3t wrote:
> If you mean something more complicated as "pattern matching", you need to
> say precisely what you mean.

I was referring to ML-style pattern matching (including dynamic dispatch
over nested algebraic data types) as seen in F#, OCaml, Haskell and SML.

Most of the functionality of the symbolic simplifier I cited is encoded in
the pattern matching:

let rec ( +: ) f g = match f, g with 
   | `Int n, `Int m -> `Int (n +/ m) 
   | `Int (Int 0), e | e, `Int (Int 0) -> e 
   | f, `Add(g, h) -> f +: g +: h 
   | f, g -> `Add(f, g) 
let rec ( *: ) f g = match f, g with 
   | `Int n, `Int m -> `Int (n */ m) 
   | `Int (Int 0), e | e, `Int (Int 0) -> `Int (Int 0) 
   | `Int (Int 1), e | e, `Int (Int 1) -> e 
   | f, `Mul(g, h) -> f *: g *: h 
   | f, g -> `Mul(f, g) 
let rec simplify = function 
   | `Int _ | `Var _ as f -> f 
   | `Add (f, g) -> simplify f +: simplify g 
   | `Mul (f, g) -> simplify f *: simplify g

All three functions just dispatch over their arguments using pattern matches
("match .. with ..." or "function ..."). The values they are matching are
algebraic datatypes and integers in this case.

Run ocaml with the "-dlambda" option and you can see the optimized Lisp-like
intermediate representation that OCaml's pattern match compiler generates:

(seq
  (letrec
    (+:/74
       (function f/75 g/76
         (catch
           (catch
             (if (isint f/75) (exit 3)
               (if (!= (field 0 f/75) 3654863) (exit 3)
                 (let (n/77 (field 1 f/75))
                   (catch
                     (if (isint g/76) (exit 4)
                       (if (!= (field 0 g/76) 3654863) (exit 4)
                         (makeblock 0 3654863
                           (apply (field 0 (global Num!)) n/77
                             (field 1 g/76)))))
                    with (4)
                     (switch n/77
                      case tag 0: (if (!= (field 0 n/77) 0) (exit 3) g/76)
                      default: (exit 3))))))
            with (3)
             (if (isint g/76) (exit 2)
               (let (variant/116 (field 0 g/76))
                 (if (!= variant/116 3254785)
                   (if (!= variant/116 3654863) (exit 2)
                     (let (match/113 (field 1 g/76))
                       (switch match/113
                        case tag 0:
                         (if (!= (field 0 match/113) 0) (exit 2) f/75)
                        default: (exit 2))))
                   (let (match/115 (field 1 g/76))
                     (apply +:/74 (apply +:/74 f/75 (field 0 match/115))
                       (field 1 match/115)))))))
          with (2) (makeblock 0 3254785 (makeblock 0 f/75 g/76)))))
    (apply (field 1 (global Toploop!)) "+:" +:/74))
  (letrec
    (*:/86
       (function f/87 g/88
         (catch
           (catch
             (catch
               (catch
                 (catch
                   (if (isint f/87) (exit 10)
                     (if (!= (field 0 f/87) 3654863) (exit 10)
                       (let (n/89 (field 1 f/87))
                         (catch
                           (if (isint g/88) (exit 11)
                             (if (!= (field 0 g/88) 3654863) (exit 11)
                               (makeblock 0 3654863
                                 (apply (field 5 (global Num!)) n/89
                                   (field 1 g/88)))))
                          with (11)
                           (switch n/89
                            case tag 0:
                             (if (!= (field 0 n/89) 0) (exit 10) (exit 5))
                            default: (exit 10))))))
                  with (10)
                   (if (isint g/88) (exit 9)
                     (if (!= (field 0 g/88) 3654863) (exit 9)
                       (let (match/124 (field 1 g/88))
                         (switch match/124
                          case tag 0:
                           (if (!= (field 0 match/124) 0) (exit 9) (exit 5))
                          default: (exit 9))))))
                with (9)
                 (if (isint f/87) (exit 8)
                   (if (!= (field 0 f/87) 3654863) (exit 8)
                     (let (match/127 (field 1 f/87))
                       (switch match/127
                        case tag 0:
                         (if (!= (field 0 match/127) 1) (exit 8) g/88)
                        default: (exit 8))))))
              with (8)
               (if (isint g/88) (exit 7)
                 (let (variant/133 (field 0 g/88))
                   (if (!= variant/133 3654863)
                     (if (!= variant/133 3855332) (exit 7)
                       (let (match/132 (field 1 g/88))
                         (apply *:/86 (apply *:/86 f/87 (field 0 match/132))
                           (field 1 match/132))))
                     (let (match/130 (field 1 g/88))
                       (switch match/130
                        case tag 0:
                         (if (!= (field 0 match/130) 1) (exit 7) f/87)
                        default: (exit 7)))))))
            with (7) (makeblock 0 3855332 (makeblock 0 f/87 g/88)))
          with (5) [0: 3654863 [0: 0]])))
    (apply (field 1 (global Toploop!)) "*:" *:/86))
  (let
    (*:/86 (apply (field 0 (global Toploop!)) "*:")
     +:/74 (apply (field 0 (global Toploop!)) "+:"))
    (letrec
      (simplify/100
         (function f/101
           (let (variant/138 (field 0 f/101))
             (if (!= variant/138 3855332)
               (if (>= variant/138 3254786) f/101
                 (let (match/134 (field 1 f/101))
                   (apply +:/74 (apply simplify/100 (field 0 match/134))
                     (apply simplify/100 (field 1 match/134)))))
               (let (match/135 (field 1 f/101))
                 (apply *:/86 (apply simplify/100 (field 0 match/135))
                   (apply simplify/100 (field 1 match/135))))))))
      (apply (field 1 (global Toploop!)) "simplify" simplify/100))))

Consider the enormous waste of time and effort involved in maintaining Lisp
code like this. In practice, Lispers just give up, write naive code and
kiss goodbye to performance. That is precisely what I was alluding to.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Patrick May
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <m23an5bil2.fsf@spe.com>
Jon Harrop <···@ffconsultancy.com> writes:
[ . . . ]
> Consider the enormous waste of time and effort involved in
> maintaining Lisp code like this. In practice, Lispers just give up,
> write naive code and kiss goodbye to performance. That is precisely
> what I was alluding to.

     Before you wrote this I thought you were either just an ignorant
F# fanboy or particularly bad at marketing your consulting services.
Now I'm convinced you're a deliberate liar.

     I'd point you to several high performance regular expression
libraries for Common Lisp, but you're clearly not interested in
rational discussion.  If your goal is to encourage adoption of F#,
you're failing miserably.  Your behavior in this newsgroup alone has
eliminated any interest I might have had in investigating it.  I
suspect I'm not alone in this view.

Sincerely,

Patrick

------------------------------------------------------------------------
S P Engineering, Inc.  | Large scale, mission-critical, distributed OO
                       | systems design and implementation.
          ···@spe.com  | (C++, Java, Common Lisp, Jini, middleware, SOA)
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <mYudnXSMy5Ir7cPVnZ2dnUVZ8qjinZ2d@posted.plusnet>
Patrick May wrote:
> I'd point you to several high performance regular expression libraries...

This is about dispatch over algebraic datatypes so regular expressions are
irrelevant.

> If your goal is to encourage adoption of F#, you're failing miserably. 
> Your behavior in this newsgroup alone has eliminated any interest I might
> have had in investigating it.  I suspect I'm not alone in this view.  

I've led you to water but I can't make you drink.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: George Neuner
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <r72t545ubu86ckekbdcncvaln22csflvq5@4ax.com>
On Sun, 22 Jun 2008 16:29:59 +0100, Jon Harrop <···@ffconsultancy.com>
wrote:

>Patrick May wrote:
>> I'd point you to several high performance regular expression libraries...
>
>This is about dispatch over algebraic datatypes so regular expressions are
>irrelevant.

Since Lisp doesn't have algebraic datatypes, you need to compare the
closest equivalent - generic function dispatch.  Not only is that
heavily optimized in most Lisps, but it directly corresponds to ML's
pattern matching alternation.

I can't recall for certain, but I don't think any of your Lisp
micro-benchmarks focused on generic functions.
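
For what it's worth, here is a minimal sketch of what I mean -- my own
reconstruction with EQL specializers, not code from any benchmark:

;; Dispatch-as-pattern-matching: each :method plays the role of one
;; match clause, with EQL specializers standing in for literal patterns.
(defgeneric simplify-add (x y)
  (:method ((x number) (y number)) (+ x y))   ; Int n + Int m
  (:method ((x (eql 0)) y) y)                 ; 0 + e -> e
  (:method (x (y (eql 0))) x)                 ; e + 0 -> e
  (:method (x y) (list '+ x y)))              ; no rule applies: rebuild

;; (simplify-add 0 'x) => X, (simplify-add 2 3) => 5,
;; (simplify-add 'x 'y) => (+ X Y)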


>> If your goal is to encourage adoption of F#, you're failing miserably. 
>> Your behavior in this newsgroup alone has eliminated any interest I might
>> have had in investigating it.  I suspect I'm not alone in this view.  
>
>I've led you to water but I can't make you drink.

We'll drink when you turn the water into wine.

George
--
for email reply remove "/" from address
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <fNWdnQOb163bCsPVRVnyvAA@posted.plusnet>
George Neuner wrote:
> On Sun, 22 Jun 2008 16:29:59 +0100, Jon Harrop <···@ffconsultancy.com>
> wrote:
>>Patrick May wrote:
>>> I'd point you to several high performance regular expression
>>> libraries...
>>
>>This is about dispatch over algebraic datatypes so regular expressions are
>>irrelevant.
> 
> Since Lisp doesn't have algebraic datatypes, you need to compare the
> closest equivalent - generic function dispatch.

Pascal Costanza's Lisp implementation of the symbolic simplifier used Lisp's
generic function dispatch as a poor man's pattern matcher:

  http://www.lambdassociates.org/studies/study10.htm

> Not only is that heavily optimized in most Lisps,

Pascal's code is several times slower than the OCaml.

> but it directly corresponds to ML's pattern matching alternation.

Actually Pascal's code demonstrated one of the ways that generic function
dispatch is less powerful. Specifically, this OCaml:

   | `Int (Int 0), e | e, `Int (Int 0) -> e 

becomes this Lisp:
 
    (:method ((x (eql 0)) y) y)
    (:method (x (y (eql 0))) x)

because generic function dispatch cannot handle or-patterns. The flaw
repeats here:

   | `Int (Int 0), e | e, `Int (Int 0) -> `Int (Int 0) 

    (:method ((x (eql 0)) y) 0) 
    (:method (x (y (eql 0))) 0) 

and here:

   | `Int (Int 1), e | e, `Int (Int 1) -> e 

    (:method ((x (eql 1)) y) y) 
    (:method (x (y (eql 1))) x) 

Use more nesting (which is very common when pattern matching is available)
and the gap between OCaml and Lisp widens very quickly.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Kenny
Subject: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <485ef865$0$7363$607ed4bc@cv.net>
Ok, so I am sitting here looking at this killer Algebra application now 
that I just gave it a wicked makeover and I need medals for the kids, 
gold, silver, and bronze depending on how they do one each mission (you 
know, quiz). So I need to know the three greatest mathematicians, or 
something like that.

Candidates?

I already have a great JPG of Einstein -- gotta go with the recognition 
factor -- and I have another great image of a mathematician I pretty 
much have to use (McCarthy) so that only leaves one slot and it would be 
nice to show respect for women and have a babe in there so of course I 
am thinking Anna Kournikova... at some point tho I might veer 
uncharacteristically towards sanity and want a sorted list of the 
greatest mathematicians, mebbe more than three, so... who are they?

btw, I am thinking McCarthy is the gold -- Lisp beats E=mc^2 by a mile.

kt
From: Brian
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <4aae3312-9c5a-4550-a64f-4497b82059e8@y21g2000hsf.googlegroups.com>
On Jun 22, 8:11 pm, Kenny <·········@gmail.com> wrote:
> Ok, so I am sitting here looking at this killer Algebra application now
> that I just gave it a wicked makeover and I need medals for the kids,
> gold, silver, and bronze depending on how they do one each mission (you
> know, quiz). So I need to know the three greatest mathematicians, or
> something like that.
>
> Candidates?
Euler?
From: Leandro Rios
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <485efd94$0$1351$834e42db@reader.greatnowhere.com>
Kenny escribió:
> Ok, so I am sitting here looking at this killer Algebra application now 
> that I just gave it a wicked makeover and I need medals for the kids, 
> gold, silver, and bronze depending on how they do one each mission (you 
> know, quiz). So I need to know the three greatest mathematicians, or 
> something like that.
> 
> Candidates?
> 
Maybe this helps:

http://james.fabpedigree.com/mathmen.htm

Leandro
From: Kenny
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <485f1102$0$11606$607ed4bc@cv.net>
Leandro Rios wrote:
> Kenny escribió:
>> Ok, so I am sitting here looking at this killer Algebra application 
>> now that I just gave it a wicked makeover and I need medals for the 
>> kids, gold, silver, and bronze depending on how they do one each 
>> mission (you know, quiz). So I need to know the three greatest 
>> mathematicians, or something like that.
>>
>> Candidates?
>>
> Maybe this helps:
> 
> http://james.fabpedigree.com/mathmen.htm
> 
> Leandro

Oh, man, good call, reminds me (tho that lists him only as an also-ran): 
Muhammed ibn Musa al-Khowarizmi! He wrote the book on Algebra! 
Literally. Not sure where I can get a picture, tho. Maybe I can give 
Anna a beard...

Thx!

kt
From: Sohail Somani
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <o5F7k.727$yg7.581@edtnps82>
Kenny wrote:
> Leandro Rios wrote:
>> Kenny escribió:
>>> Ok, so I am sitting here looking at this killer Algebra application 
>>> now that I just gave it a wicked makeover and I need medals for the 
>>> kids, gold, silver, and bronze depending on how they do one each 
>>> mission (you know, quiz). So I need to know the three greatest 
>>> mathematicians, or something like that.
>>>
>>> Candidates?
>>>
>> Maybe this helps:
>>
>> http://james.fabpedigree.com/mathmen.htm
>>
>> Leandro
> 
> Oh, man, good call, reminds me (tho that lists him only as an also-ran): 
> Muhammed ibn Musa al-Khowarizmi! He wrote the book on Algebra! 
> Literally. Not sure where I can get a picture, tho. Maybe I can give 
> Anna a beard...

That would be a nice nod!
From: Sohail Somani
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <485F1E46.7030300@taggedtype.net>
Kenny wrote:
> Leandro Rios wrote:
>> Kenny escribió:
>>> Ok, so I am sitting here looking at this killer Algebra application 
>>> now that I just gave it a wicked makeover and I need medals for the 
>>> kids, gold, silver, and bronze depending on how they do one each 
>>> mission (you know, quiz). So I need to know the three greatest 
>>> mathematicians, or something like that.
>>>
>>> Candidates?
>>>
>> Maybe this helps:
>>
>> http://james.fabpedigree.com/mathmen.htm
>>
>> Leandro
> 
> Oh, man, good call, reminds me (tho that lists him only as an also-ran): 
> Muhammed ibn Musa al-Khowarizmi! He wrote the book on Algebra! 
> Literally. Not sure where I can get a picture, tho. Maybe I can give 
> Anna a beard...
> 
> Thx!
> 
> kt

Also this guy's story intrigued me to no end:

http://james.fabpedigree.com/mathmen.htm#Ramanujan

I suggest using a random number generator.
From: Kenny
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <485f9812$0$7322$607ed4bc@cv.net>
Sohail Somani wrote:
> Kenny wrote:
>> Leandro Rios wrote:
>>> Kenny escribió:
>>>> Ok, so I am sitting here looking at this killer Algebra application 
>>>> now that I just gave it a wicked makeover and I need medals for the 
>>>> kids, gold, silver, and bronze depending on how they do one each 
>>>> mission (you know, quiz). So I need to know the three greatest 
>>>> mathematicians, or something like that.
>>>>
>>>> Candidates?
>>>>
>>> Maybe this helps:
>>>
>>> http://james.fabpedigree.com/mathmen.htm
>>>
>>> Leandro
>>
>> Oh, man, good call, reminds me (tho that lists him only as an 
>> also-ran): Muhammed ibn Musa al-Khowarizmi! He wrote the book on 
>> Algebra! Literally. Not sure where I can get a picture, tho. Maybe I 
>> can give Anna a beard...
>>
>> Thx!
>>
>> kt
> 
> Also this guy's story intrigued me to no end:
> 
> http://james.fabpedigree.com/mathmen.htm#Ramanujan

Yes, that was pretty cool. Sad ending.

kt
From: Sohail Somani
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <LYQ7k.854$yg7.356@edtnps82>
Kenny wrote:
> Sohail Somani wrote:
>> Also this guy's story intrigued me to no end:
>>
>> http://james.fabpedigree.com/mathmen.htm#Ramanujan
> 
> Yes, that was pretty cool. Sad ending.

Turing too. Maybe you should just have a picture of a lot of people and 
let them choose along with a bio of each guy.
From: Thomas F. Burdick
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <816d1d2d-bd38-4a0f-8030-81ff4d48da96@a9g2000prl.googlegroups.com>
On Jun 23, 4:57 am, Kenny <·········@gmail.com> wrote:
> Leandro Rios wrote:
> > Kenny escribió:
> >> Ok, so I am sitting here looking at this killer Algebra application
> >> now that I just gave it a wicked makeover and I need medals for the
> >> kids, gold, silver, and bronze depending on how they do one each
> >> mission (you know, quiz). So I need to know the three greatest
> >> mathematicians, or something like that.
>
> >> Candidates?
>
> > Maybe this helps:
>
> >http://james.fabpedigree.com/mathmen.htm
>
> > Leandro
>
> Oh, man, good call, reminds me (tho that lists him only as an also-ran):
> Muhammed ibn Musa al-Khowarizmi! He wrote the book on Algebra!
> Literally. Not sure where I can get a picture, tho. Maybe I can give
> Anna a beard...

Google image search turns up a pretty neat looking postage stamp.
Also, François Viète looks kinda amusing and popularized a convention
rather dear to your subject.
From: tortoise
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <688eb478-bd4a-48e9-b11b-6c1febfbc792@v26g2000prm.googlegroups.com>
Kenny wrote:
> Ok, so I am sitting here looking at this killer Algebra application now
> that I just gave it a wicked makeover and I need medals for the kids,
> gold, silver, and bronze depending on how they do one each mission (you
> know, quiz). So I need to know the three greatest mathematicians, or
> something like that.
>
> Candidates?
>
> I already have a great JPG of Einstein -- gotta go with the recognition
> factor -- and I have another great image of a mathematician I pretty
> much have to use (McCarthy) so that only leaves one slot and it would be
> nice to show respect for women and have a babe in there so of course I
> am thinking Anna Kournikova... at some point tho I might veer
> uncharacteristically towards sanity and want a sorted list of the
> greatest mathematicians, mebbe more than three, so... who are they?
>
> btw, I am thinking McCarthy is the gold -- Lisp beats E=mc^2 by a mile.
>
> kt

Kurt Goedel invented the lambda calculus 20 years before McCarthy.

He also supposedly ruined computer science by helping to establish
the limits of computability (which seems to be an abstract math thing
that may not match reality -- but modern computers do seem to be
classical logic machines, unfortunately).
From: Kenny
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <485f99d2$0$7350$607ed4bc@cv.net>
tortoise wrote:
> 
> Kenny wrote:
>> Ok, so I am sitting here looking at this killer Algebra application now
>> that I just gave it a wicked makeover and I need medals for the kids,
>> gold, silver, and bronze depending on how they do one each mission (you
>> know, quiz). So I need to know the three greatest mathematicians, or
>> something like that.
>>
>> Candidates?
>>
>> I already have a great JPG of Einstein -- gotta go with the recognition
>> factor -- and I have another great image of a mathematician I pretty
>> much have to use (McCarthy) so that only leaves one slot and it would be
>> nice to show respect for women and have a babe in there so of course I
>> am thinking Anna Kournikova... at some point tho I might veer
>> uncharacteristically towards sanity and want a sorted list of the
>> greatest mathematicians, mebbe more than three, so... who are they?
>>
>> btw, I am thinking McCarthy is the gold -- Lisp beats E=mc^2 by a mile.
>>
>> kt
> 
> Kurt Goedel invented the lambda calculus 20 years before McCarthy.

Yeah, but I am more impressed by the guy who invented the Frisbee than 
by the Wright brothers.

But you remind me: Turing! Doh!

Clearly we are now looking at a series of medallions for different 
subskills, with the same medallion in gold, silver, or bronze based on 
score.

Anyway, I think they'll be more interested in the prize money...

kt
From: Pascal J. Bourguignon
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <7cbq1sppjp.fsf@pbourguignon.anevia.com>
tortoise <··········@gmail.com> writes:

> Kenny wrote:
>> btw, I am thinking McCarthy is the gold -- Lisp beats E=mc^2 by a mile.
>
> Kurt Goedel invented the lambda calculus 20 years before McCarthy.

No, it was Alonzo Church who invented Lambda Calculus.

Kurt Gödel worked on formal systems.

> Also supposedly ruined computer science by helping to establish
> limits of computability (seems to be an abstract math thing that may
> not be reality -- but modern computers do seem to be classical logic
> machines unfortunately).


-- 
__Pascal Bourguignon__
From: tortoise
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <b35bb171-96c6-464f-89f2-c59cc2839cb7@g16g2000pri.googlegroups.com>
tortoise wrote:
> Kenny wrote:
> > Ok, so I am sitting here looking at this killer Algebra application now
> > that I just gave it a wicked makeover and I need medals for the kids,
> > gold, silver, and bronze depending on how they do one each mission (you
> > know, quiz). So I need to know the three greatest mathematicians, or
> > something like that.
> >
> > Candidates?
> >

>
> Kurt Goedel invented the lambda calculus 20 years before McCarthy.

correction: KG invented *lisp* 20 years before McCarthy, by making
an innovation in the lambda calculus (see Douglas Hofstadter,
Metamagical Themas)


>
> Also supposedly ruined computer science by helping to establish
> limits of computability (seems to be an abstract math thing that may
> not be reality -- but modern computers do seem to be classical logic
> machines unfortunately).

clarification: he set us up for all the current mess of computer
languages and limitations. ok, maybe it wasn't all his fault, it was
really a warning, but it was so far ahead of its time...

whether or not people believe in the mathematical theory of computer
science, it's really about the physics of our computer designs, and it
does not explain very much at all about how programs really manage to
work or how to proceed in the future, as a good theory should do (see
Brian Cantwell Smith, On the Origin of Objects)

Just my perspective. I still think lisp should beat c++ by a mile, but
I'm still frustrated at its size and complexity, and scheme is too
limited, so I am thinking I should try and work with PG's ARC ??
From: Aatu Koskensilta
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <827ic9jk2d.fsf@A166.veli3.tontut.fi>
tortoise <··········@gmail.com> writes:

> tortoise wrote:
>
> > Kurt Goedel invented the lambda calculus 20 years before McCarthy.
> 
> correction: KG invented *lisp* 20 years before McCarthy, and by
> makeing an innovation in the lambda calculus (see Douglas
> Hofstadter, Mathemagical Themas)

You're mistaken. In reality, Kurt Gödel invented inflammable broccoli
yoghurt in 1657. Lambda calculus was devised by the redoubtable Alonzo
Church a few weeks later.

-- 
Aatu Koskensilta (················@uta.fi)

"Wovon man nicht sprechen kann, dar�ber muss man schweigen"
 - Ludwig Wittgenstein, Tractatus Logico-Philosophics
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <rem-2008jun30-007@yahoo.com>
> Date: Mon, 23 Jun 2008 12:52:34 +0530
Why this response is so belated:
  <http://groups.google.com/group/misc.misc/msg/cea714440e591dd2>
= <······················@yahoo.com>
> From: Madhu <·······@meer.net>
> No, Alonzo Church (JmC's advisor), and Kleene, are credited with the
> lambda calculus, in the 30s.

Right. JMC's innovative idea was that the lambda calculus might
make a good bootstrap toward a completely general-purpose
programming language that would be easy to bootstrap yet totally
general in functionality as well as make proofs of correctness
follow directly from mathematical proofs in the lambda calculus.
(That's my guess as to the importance of his original idea. Of
 course when it was actually implemented it became even more
 valuable, in a more practical sense, than he had foreseen.)

> Almost all modern computers are von-neumman machines --- the
> architecture is credited to John von neumman, ...

Yet assembly/machine language and Lisp are the only two languages
that take full advantage of this architecture, where program equals
data, in regard to the ability to build and modify code in useful ways.
Virtually every other programming language treats program and
data as two separate parts of memory, where program can:
- manipulate data but not manipulate the executable program itself
- jump to locations in program space but not jump to anywhere in data space
and where the only practical way to generate program then execute
it is to:
- generate it as data and write to disk, then
- load it back in as executable program later in a separate "core image".
This model of executable-code separate from modifiable-data is more
like the Harvard architecture.
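
By contrast, a trivial Common Lisp sketch of the program-equals-data
side:

;; Code is just list data: build it, rewrite it, then compile and call
;; it in the same image -- no disk round-trip, no separate core image.
(defparameter *source* '(lambda (x) (* x x)))
(defparameter *squarer* (compile nil *source*))  ; compiled at run time
;; (funcall *squarer* 12) => 144
;; Rewrite the data and get a new program on the spot:
;; (funcall (compile nil (subst '+ '* *source*)) 12) => 24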
From: Tim Bradshaw
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <25b1eff8-06bd-4171-b113-2ab4973cd404@c58g2000hsc.googlegroups.com>
On Jun 30, 9:00 am, ··················@spamgourmet.com.remove (Robert
Maas, http://tinyurl.com/uh3t) wrote:

> This model of executable-code separate from modifiable-data is more
> like the Harvard architecture.

It is, and that's because the alternative is horrible for high-
performance systems, which want to be able to make aggressive
assumptions about the cacheability of code. All this, of course, is
because real modern machines are actually not really von Neumann
machines at all, but something much more complicated, which manage to
do a reasonable (but not always perfect) job of looking to the
programmer like a von Neumann machine.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <rem-2008jul20-004@yahoo.com>
> > This model of executable-code separate from modifiable-data is more
> > like the Harvard architecture.
> From: Tim Bradshaw <··········@tfeb.org>
> It is, and that's because the alternative is horrible for high-
> performance systems, which want to be able to make aggressive
> assumptions about the cacheability of code. All this, of course, is
> because real modern machines are actually not really von Neumann
> machines at all, but something much more complicated, which manage
> to do a reasonable (but not always perfect) job of looking to the
> programmer like a von Neumann machine.

What you write is even more enlightening than what I wrote.
IMO any course of instruction for computer programming ought to
first present both the von Neumann and Harvard models, and discuss
how they present different practical issues, such as the question
as to whether code is data (the essential presumption of Lisp
and machine language programming) or not (the essential presumption
of languages that prohibit enhancing the set of executable code
once execution has already started). Then your point should be
raised next, that things ain't so simple as black and white, and
discussed at various levels:
- at the machine level (your specific point)
- at the operating-system virtual-memory level (where different
   applications can share read-only pages, but as soon as one
   application writes to such a page, that page must be unshared)
- at the high-level programming level (such as Java, where Class
   files must be compiled from source, but they *can* be loaded
   into the JVM at any time even after execution has started, but
   due to CPU and OS/VM restrictions above, a trick must be used to
   switch a page from "data" to "instruction" after loading before
   execution, even though in fact *all* Java code is "data" because
   it's running in the JVM not the CPU directly)
Actually that last complication leads me to a follow-up question:
Are modern CPUs able to make aggressive assumptions about the
cacheability of "bytecode" that is interpreted by the JVM just the
same as they do about true machine code that is executed directly
on the CPU? Or does the JVM methodology defeat the
modern-CPU-assumption mechanism so that *only* the innards of the
JVM interpreter itself can be aggressively cached, but tight loops
within interpreted bytecode programs (compiled Java code) are *not*
aggressively cached because the CPU doesn't "know" they are code,
the CPU believes they are "just data" and hence not presumed to be
worthy of aggressive predictions of cacheability? Or do *really*
modern CPUs implement predictive-cacheability of both code and data
separately, thus effectively caching both the JVM interpreter
(machine code) and JVM-bytecode applications ("just data")? I.e.
are *really* modern CPUs able to recognize the design pattern of
data that is acting as if it were instructions?

Hmm, that latter possibility brings up a new model: hybrid
multi-level Von Neumann and Harvard machines, whereby at one level
of abstraction only the micro-code is code, everything else
(including CPU code) is data, and at the next level only the CPU
code (including JVM-interpretor) is code and everything else is
data, and at the next level only the bytecode is code and
everything else is data. No limit to the number of different places
where a sort of code/data distinction is made, where "code" is kept
in one cache per one optimization heuristic and "data" is kept out
of that cache or at least treated with a different heuristic.

Maybe I'm thinking too deep on something I actually know just a
little about, never having even *seen* the formal specification of
any CPU instruction set more recent than 68000, so I'll stop at
this point.
(OT: Interesting program yesterday on NPR (KQED-FM) about that
 general topic of talking from little knowledge. At least I include
 a caveat as to my lack of deep knowledge in a topic, and post my
 ideas as speculation or thinking rather than as fact, unlike the
 "A.J." (obscene word there) in the NPR program.
   Linkname: KQED | public radio 88.5 and 89.3: daily schedule
        URL: http://www.kqed.org/radio/daily-schedule.jsp?Month=7&Date=19&Year=2008&Format=long
   Linkname: This American Life
        URL: http://www.kqed.org/programs/program-landing.jsp?progID=RD47
   A Little Bit of Knowledge -- The show hears the story of an
   electrician who thought he could disprove Einstein, and others about
   the pitfalls of knowing just a little too little.
 )

P.S. Maybe it's a good thing I'm so tardy in responding to your
article, or I wouldn't have already listened to that NPR program at
the time of my reply, and consequently I wouldn't have been able to
include that link.
From: Vassil Nikolov
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <snz8wvti9d9.fsf@luna.vassil.nikolov.name>
On Sun, 20 Jul 2008 08:58:02 -0700, ··················@spamgourmet.com.remove (Robert Maas, http://tinyurl.com/uh3t) said:
| ...
| Are modern CPUs able to make aggressive assumptions about the
| cacheability of "bytecode" that is interpreted by the JVM just the
| same as they do about true machine code that is executed directly
| on the CPU? Or does the JVM methodology defeat the
| modern-CPU-assumption mechanism so that *only* the innards of the
| JVM interpreter itself can be aggressively cached, but tight loops
| within interpreted bytecode programs (compiled Java code) are *not*
| aggressively cached because the CPU doesn't "know" they are code,
| the CPU believes they are "just data" and hence not presumed to be
| worthy of aggressive predictions of cacheability? Or do *really*
| modern CPUs implement predictive-cacheability of both code and data
| separately, thus effectively caching both the JVM interpreter
| (machine code) and JVM-bytecode applications ("just data")? I.e.
| are *really* modern CPUs able to recognize the design pattern of
| data that is acting as if it were instructions?

  For the JVM, it seems to me that the above is a non-issue, or at
  least that so far it hasn't mattered much, because the JIT compiler
  turns the "data" into "real code" anyway before unleashing the CPU
  on it.

  ---Vassil.


-- 
Peius melius est.  ---Ricardus Gabriel.
From: Rob Warnock
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <L_udnWt8Z6eLuMLVnZ2dnUVZ_sTinZ2d@speakeasy.net>
Kenny <·········@gmail.com> wrote:
+---------------
| ...so that only leaves one slot and it would be nice to show
| respect for women and have a babe in there so of course I am
| thinking Anna Kournikova... at some point tho I might veer 
| uncharacteristically towards sanity ...
+---------------

Let me in no wise discourage your general admiration of Anna K's
obvious qualities... ;-}  ;-}  ...but in this case may I be so bold
as to suggest that the following might be a substantially better
choice [unless you were to find an even more outstanding female
mathematician!]:

    http://en.wikipedia.org/wiki/Emmy_Noether
    Amalie Emmy Noether ... (March 23, 1882 - April 14, 1935) was a
    German Jewish mathematician who is known for her seminal contributions
    to abstract algebra. Often described as the most important woman
    in the history of mathematics, she revolutionized the theories of
    rings, fields, and algebras. She is also known for her contributions
    to modern theoretical physics, especially for the first Noether's
    theorem which explains the connection between symmetry and
    conservation laws. ...

In the form used by physicists[1], Noether's theorem states:

    For every symmetry exhibited by a physical law, there
    is a corresponding observable quantity that is conserved.

E.g., if a physical law (or more loosely, physical system) behaves
the same regardless of when in the past or future you measure it
(temporal translation symmetry), then it must conserve energy.
If it behaves the same regardless of how it is positioned up, down,
or sidewise in space (translational symmetry), then it must conserve
linear momentum. And if it behaves the same regardless of how it is
oriented in space (rotational symmetry), then that law/system
must conserve angular momentum. Etc.[2]
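
To make the first case concrete, here is the standard one-line
textbook calculation (sketched in LaTeX notation; this is generic
Lagrangian mechanics, not anything specific to Schumm's book): for a
particle with Lagrangian $L(q,\dot q)$ having no explicit time
dependence, the energy

    $E = \dot q \, \frac{\partial L}{\partial \dot q} - L$

is conserved, since

    $\frac{dE}{dt}
       = \dot q \left( \frac{d}{dt} \frac{\partial L}{\partial \dot q}
                       - \frac{\partial L}{\partial q} \right)
       = 0$

after the $\ddot q$ terms cancel, the last step being the
Euler-Lagrange equation. For $L = \frac{1}{2} m \dot q^2 - V(q)$
this $E$ is the familiar kinetic-plus-potential energy.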


-Rob

[1] Page 172 of:

      http://www.press.jhu.edu/books/title_pages/3474.html
      Deep Down Things
      The Breathtaking Beauty of Particle Physics
      by Bruce A. Schumm

    I only heard about Noether this week while reading this
    very enjoyable and (ironically!) relatively non-mathematical
    presentation of the Standard Model of particle physics.

[2] I say "Etc." since it's not limited to macro-scale or even
    "physical" quantities that are directly observable, but
    (as best I understand it) the only real prerequisite for applying
    Noether's theorem is the invariance of the law in question
    under some -- any -- symmetry. Thus it naturally extends to
    such things as "internal symmetry spaces" such as the SU(3)
    Lie group used in the Standard Model to describe the strong
    nuclear force, with the result that (e.g.) total nuclear isospin
    must be conserved in any interaction [as must the strong color
    charge (modulo the weird way the "R/G/B"s can cancel each other),
    if I'm reading Schumm correctly]. That kind of thing. Fun stuff.  ;-}

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Paul Tarvydas
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <g3n64i$bk2$1@aioe.org>
Johannes Kepler.  Hands down winner.  His math proved that the Earth goes
around the Sun, and not v.v., overturning 2000 years of (incorrect)
Ptolemaic astronomy.  Kepler's math also led to Newton's Laws.

http://en.wikipedia.org/wiki/Johannes_Kepler

pt
From: Kenny
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <485f97cd$0$7316$607ed4bc@cv.net>
Paul Tarvydas wrote:
> Johannes Kepler.  Hands down winner.  His math proved that the Earth goes
> around the Sun, and not v.v., overturning 2000 years of (incorrect)
> Ptolemaic astronomy.  Kepler's math also led to Newton's Laws.

And almost got him burned at the stake... or was that Newton? Galileo? 
Reminds me of the c.l.l lynch mob that strung me up over camelCase....

kt
From: Paul Tarvydas
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <g3o9v9$hvg$1@aioe.org>
Kenny wrote:

> Paul Tarvydas wrote:
>> Johannes Kepler.  Hands down winner.  His math proved that the Earth goes
>> around the Sun, and not v.v., overturning 2000 years of (incorrect)
>> Ptolemaic astronomy.  Kepler's math also led to Newton's Laws.
> 
> And almost got him burned at the stake... or was that Newton? Galileo?
> Reminds me of the c.l.l lynch mob that strung me up over camelCase....

You're probably thinking of Galileo, but that isn't close to the truth - he
was well-treated, esp. for someone who had stolen the claim - from a Jesuit
priest - for being the first to see the phases of Venus :-).

Kepler worked for Tycho Brahe.  Kepler had to sneak into Brahe's library
when Brahe was asleep, to steal astronomical data - Brahe had tried to make
sure that Kepler got to touch only a portion of the data, lest Kepler beat
Brahe to his own theory (which was doomed anyway).  Kepler finished his
work when Brahe died and access to the data loosened up.  Then, after
figuring out the details of elliptical orbits, Kepler continued on his
quest to prove his concentric spheres theory (which was doomed, esp. by the
elliptical orbit results).

(Arthur Koestler's book "Sleepwalkers" is an interesting read, full of
details about the personalities of these guys, unlike most history books).

pt
From: George Neuner
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <5c6u541v9bdpvn9aiprj75g6e6dka4r88q@4ax.com>
On Sun, 22 Jun 2008 21:11:16 -0400, Kenny <·········@gmail.com> wrote:

>Ok, so I am sitting here looking at this killer Algebra application now 
>that I just gave it a wicked makeover and I need medals for the kids, 
>gold, silver, and bronze depending on how they do on each mission (you 
>know, quiz). So I need to know the three greatest mathematicians, or 
>something like that.
>
>Candidates?
>
>I already have a great JPG of Einstein

Einstein, by his own admission, was not a terribly good mathematician.
His first wife, Mileva Maric', was much better and is suspected of
secretly doing much of the math for him.


> -- gotta go with the recognition 
>factor -- and I have another great image of a mathematician I pretty 
>much have to use (McCarthy) so that only leaves one slot and it would be 
>nice to show respect for women and have a babe in there 

Mileva Maric'?  Ada Lovelace?  Grace Hopper?

http://www.agnesscott.edu/lriddle/WOMEN/alpha.htm
Sad to say I recognize almost none of the names on this list.


>so of course I am thinking Anna Kournikova
>... at some point tho I might veer 
>uncharacteristically towards sanity and want a sorted list of the 
>greatest mathematicians, mebbe more than three, so... who are they?
>
>btw, I am thinking McCarthy is the gold -- Lisp beats E=mc^2 by a mile.

(sort (list 'Newton 'Leibniz 'Euler 'Fermat 'Wiles) #'string<)


George
--
for email reply remove "/" from address
From: Kenny
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <485f9a19$0$7332$607ed4bc@cv.net>
George Neuner wrote:
> On Sun, 22 Jun 2008 21:11:16 -0400, Kenny <·········@gmail.com> wrote:
> 
>> Ok, so I am sitting here looking at this killer Algebra application now 
>> that I just gave it a wicked makeover and I need medals for the kids, 
>> gold, silver, and bronze depending on how they do on each mission (you 
>> know, quiz). So I need to know the three greatest mathematicians, or 
>> something like that.
>>
>> Candidates?
>>
>> I already have a great JPG of Einstein
> 
> Einstein, by his own admission, was not a terribly good mathematician.

And Anna never won a major.

:)

kt
From: Sohail Somani
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <41D7k.835$2G6.145@edtnps83>
Kenny wrote:
> Ok, so I am sitting here looking at this killer Algebra application now 
> that I just gave it a wicked makeover and I need medals for the kids, 
> gold, silver, and bronze depending on how they do on each mission (you 
> know, quiz). So I need to know the three greatest mathematicians, or 
> something like that.
> 
> Candidates?
> 
> I already have a great JPG of Einstein -- gotta go with the recognition 
> factor -- and I have another great image of a mathematician I pretty 
> much have to use (McCarthy) so that only leaves one slot and it would be 
> nice to show respect for women and have a babe in there so of course I 
> am thinking Anna Kournikova... at some point tho I might veer 
> uncharacteristically towards sanity and want a sorted list of the 
> greatest mathematicians, mebbe more than three, so... who are they?
> 
> btw, I am thinking McCarthy is the gold -- Lisp beats E=mc^2 by a mile.

Grace Hopper.
From: Edi Weitz
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <u7icgppux.fsf@agharta.de>
On Sun, 22 Jun 2008 21:11:16 -0400, Kenny <·········@gmail.com> wrote:

> Ok, so I am sitting here looking at this killer Algebra application
> now that I just gave it a wicked makeover and I need medals for the
> kids, gold, silver, and bronze depending on how they do on each
> mission (you know, quiz). So I need to know the three greatest
> mathematicians, or something like that.
>
> Candidates?

  http://en.wikipedia.org/wiki/Carl_Friedrich_Gauss

Edi.

-- 

Lisp is not dead, it just smells funny.

Real email: (replace (subseq ·········@agharta.de" 5) "edi")
From: Daniel Janus
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <slrng5ulhk.grq.przesunmalpe@students.mimuw.edu.pl>
Edi Weitz <········@agharta.de>:

> On Sun, 22 Jun 2008 21:11:16 -0400, Kenny <·········@gmail.com> wrote:
>
>> Ok, so I am sitting here looking at this killer Algebra application
>> now that I just gave it a wicked makeover and I need medals for the
>> kids, gold, silver, and bronze depending on how they do on each
>> mission (you know, quiz). So I need to know the three greatest
>> mathematicians, or something like that.
>>
>> Candidates?
>
>   http://en.wikipedia.org/wiki/Carl_Friedrich_Gauss

I'd add http://en.wikipedia.org/wiki/Srinivasa_Ramanujan .

-- 
Daniel 'Nathell' Janus, ······@nathell.korpus.pl, http://korpus.pl/~nathell
"Though a program be but three lines long, someday it will have to be
maintained."
   -- The Tao of Programming 
From: Vassil Nikolov
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <snz7icfr0c7.fsf@luna.vassil.nikolov.name>
On Mon, 23 Jun 2008 09:33:42 +0200, Edi Weitz <········@agharta.de> said:

| On Sun, 22 Jun 2008 21:11:16 -0400, Kenny <·········@gmail.com> wrote:
|...
|| need to know the three greatest
|| mathematicians, or something like that.
|| 
|| Candidates?

|   http://en.wikipedia.org/wiki/Carl_Friedrich_Gauss

  Yet another proof that he is among the worthiest is this
  "traditional" definition of mathematician by induction:

  1. Gauss is a mathematician.
  2. X is a mathematician if declared one by a mathematician.
  3. There are no other mathematicians.

  ---Vassil.


-- 
Peius melius est.  ---Ricardus Gabriel.
From: jayessay
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <m3abhbvksw.fsf@sirius.goldenthreadtech.com>
Kenny <·········@gmail.com> writes:

> Ok, so I am sitting here looking at this killer Algebra application
> now that I just gave it a wicked makeover and I need medals for the
> kids, gold, silver, and bronze depending on how they do on each
> mission (you know, quiz). So I need to know the three greatest
> mathematicians, or something like that.
> 
> Candidates?

As a mathematician, I'd say:


1. Gauss

2. Archimedes

3. Newton.


I think most mathematicians would buy into that.  I'd be quite
surprised if any of them did not think Gauss was the undisputed #1.


/Jon

-- 
'j' - a n t h o n y at romeo/charley/november com
From: Tim X
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <87k5ggwnkq.fsf@lion.rapttech.com.au>
Kenny <·········@gmail.com> writes:

> Ok, so I am sitting here looking at this killer Algebra application now
> that I just gave it a wicked makeover and I need medals for the kids, gold,
> silver, and bronze depending on how they do on each mission (you know,
> quiz). So I need to know the three greatest mathematicians, or something
> like that.
>
> Candidates?
>
> I already have a great JPG of Einstein -- gotta go with the recognition
> factor -- and I have another great image of a mathematician I pretty much
> have to use (McCarthy) so that only leaves one slot and it would be nice to
> show respect for women and have a babe in there so of course I am thinking
> Anna Kournikova... at some point tho I might veer uncharacteristically
> towards sanity and want a sorted list of the greatest mathematicians, mebbe
> more than three, so... who are they?
>
> btw, I am thinking McCarthy is the gold -- Lisp beats E=mc^2 by a mile.
>
> kt

For someone who was extremely prolific in publishing papers on
mathematics and who covered a remarkably diverse range of problems, my
vote would be for Paul Erdős.

Tim

-- 
tcross (at) rapttech dot com dot au
From: Aatu Koskensilta
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <82vdzqhoir.fsf@A166.veli3.tontut.fi>
Kenny <·········@gmail.com> writes:

> So I need to know the three greatest mathematicians, or something
> like that.

On what mathematical accomplishment of Einstein's or McCarthy's do you
base their inclusion among the three greatest mathematicians?

-- 
Aatu Koskensilta (················@uta.fi)

"Wovon man nicht sprechen kann, dar�ber muss man schweigen"
 - Ludwig Wittgenstein, Tractatus Logico-Philosophics
From: Kenny
Subject: Re: Help Kenny! (Exit Yobbos)
Date: 
Message-ID: <48690c27$0$5022$607ed4bc@cv.net>
Aatu Koskensilta wrote:
> Kenny <·········@gmail.com> writes:
> 
>> So I need to know the three greatest mathematicians, or something
>> like that.
> 
> On what mathematical accomplishment of Einstein's or McCarthy's do you
> base their inclusion among the three greatest mathematicians?
> 

It tickles me no end that this objection was not applied to Anna 
Kournikova. But you are confused. Go reread the OP. I said Al and John 
would get medals for reasons like name recognition, great JPEG 
availability, or for having invented Lisp in the same post that I said I 
needed to know (see, that means I am asking, not listing) for the three 
greatest mathematicians.

The juxtaposition fooled you, I think, which is understandable but 
woefully illogical, imprecise, and sloppy so there goes your shot at a 
medal. :)

Btw, both Einstein and McCarthy did rather amazing things with 
mathematics so given my track record on applications vs theory and 
working programmers vs. academics...well, who was the greatest 
baseballer, Babe Ruth or Abner Doubleday?

QfED

kt
From: George Neuner
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <0a8u54d2isvvvjn7pcmk7gss8issdfvv5u@4ax.com>
On Sun, 22 Jun 2008 19:14:32 +0100, Jon Harrop <···@ffconsultancy.com>
wrote:

>George Neuner wrote:
>> On Sun, 22 Jun 2008 16:29:59 +0100, Jon Harrop <···@ffconsultancy.com>
>> wrote:
>>>Patrick May wrote:
>>>> I'd point you to several high performance regular expression
>>>> libraries...
>>>
>>>This is about dispatch over algebraic datatypes so regular expressions are
>>>irrelevant.
>> 
>> Since Lisp doesn't have algebraic datatypes, you need to compare the
>> closest equivalent - generic function dispatch.
>
>Pascal Costanza's Lisp implementation of the symbolic simplifier used Lisp's
>generic function dispatch as a poor man's pattern matcher:
>
>  http://www.lambdassociates.org/studies/study10.htm
>
>> Not only is that heavily optimized in most Lisps,
>
>Pascal's code is several times slower than the OCaml.

How did you time it?

Lisp programs can be changed at runtime, so a generic function's
method dispatch code is normally generated at runtime at the first
call to the function.  To exclude code generation from timing, you
need to make a throwaway pass through the algorithm to generate all
the dispatch codes and then enter your timing loop.

I'll bet you didn't do that.
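
A minimal sketch of what I mean, in portable CL (SIMPLIFY and EXPR
are invented names here, standing for the generic function under
test and a representative input):

  ;; Throwaway pass: forces CLOS to construct and cache the dispatch
  ;; code for every method this input actually reaches.
  (simplify expr)

  ;; Only now measure the algorithm itself.
  (time (dotimes (i 1000)
          (simplify expr)))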

Some Lisp compilers will generate the dispatch code at compile time in
exchange for a promise that the program will not modify the method set
at runtime.  That promise is generally not enforced though, so if the
program then does change the method set at runtime, the function's
dispatch code will have to be discarded and regenerated.


>> but it directly corresponds to ML's pattern matching alternation.
>
>Actually Pascal's code demonstrated one of the ways that generic function
>dispatch is less powerful. Specifically, this OCaml:
>
>   | `Int (Int 0), e | e, `Int (Int 0) -> e 
>
>becomes this Lisp:
> 
>    (:method ((x (eql 0)) y) y)
>    (:method (x (y (eql 0))) x)
>
>because generic function dispatch cannot handle or-patterns. 

Yes it does - Those two methods together *are* an or-pattern, you just
can't see the dispatch for the syntax.

The generic function is not visible to the programmer - it is
generated by the compiler from the argument patterns of its instance
methods.

What you don't seem to understand is that from the source: 

  (defgeneric f (x y))
  (defmethod  f ((x (eql 0)) y) y)
  (defmethod  f (x (y (eql 0))) x)

the compiler will silently generate something like:

  (defun f-1 (x y) y)
  (defun f-2 (x y) x)

  (defun f (x y) 
    (cond ((eql x 0) (f-1 x y))
          ((eql y 0) (f-2 x y))
          (t (error "no matching method"))
    ))

and then, depending on optimization levels, f-1 and f-2 may be
in-lined and the whole thing reduced to a single expression with no
nested calls.

The major difference from OCaml's pattern matching is the need to
explicitly name the pattern variables because they are simultaneously
formal arguments to the methods.


>
>  :
>
>Use more nesting (which is very common when pattern matching is available)
>and the gap between OCaml and Lisp widens very quickly.

Just nested calls to other generic functions.


George
--
for email reply remove "/" from address
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <Wb6dnXT2m_gMEMLVRVnyuAA@posted.plusnet>
George Neuner wrote:
> On Sun, 22 Jun 2008 19:14:32 +0100, Jon Harrop <···@ffconsultancy.com>
> wrote:
>>Pascal's code is several times slower than the OCaml.
> 
> How did you time it?

I was quoting Mark Tarver's results from the webpage I cited.

> Lisp programs can be changed at runtime, so a generic function's
> method dispatch code is normally generated at runtime at the first
> call to the function.  To exclude code generation from timing, you
> need to make a throwaway pass through the algorithm to generate all
> the dispatch codes and then enter your timing loop.
> 
> I'll bet you didn't do that.
> 
> Some Lisp compilers will generate the dispatch code at compile time in
> exchange for a promise that the program will not modify the method set
> at runtime.  That promise is generally not enforced though, so if the
> program then does change the method set at runtime, the function's
> dispatch code will have to be discarded and regenerated.

I still have the original programs on my machine here. Rerunning the
benchmarks on a 12MB randomly-generated symbolic expression three times
without recompilation on a 2.2GHz Athlon64 running 64-bit Debian I get
similar results to Mark's:

OCaml:         0.288s 0.284s 0.285s  ocamlopt 3.10.2
Pascal's Lisp: 0.812s 1.154s 0.925s  sbcl 1.0.16

That is for the unoptimized OCaml. The optimized OCaml takes only 0.16s.

>>> but it directly corresponds to ML's pattern matching alternation.
>>
>>Actually Pascal's code demonstrated one of the ways that generic function
>>dispatch is less powerful. Specifically, this OCaml:
>>
>>   | `Int (Int 0), e | e, `Int (Int 0) -> e
>>
>>becomes this Lisp:
>> 
>>    (:method ((x (eql 0)) y) y)
>>    (:method (x (y (eql 0))) x)
>>
>>because generic function dispatch cannot handle or-patterns.
> 
> Yes it does - Those two methods together *are* an or-pattern, you just
> can't see the dispatch for the syntax.

No, it doesn't. In that case a single or-pattern was expanded into two
separate generic methods. In general, this is a combinatoric code explosion
as the patterns get more complicated.

For example, this single pattern in OCaml:

  | (2|3|5 as x), (3|5|7 as y) -> x + y

has the obvious translation into Lisp (following Pascal's lead) as:

  (defmethod  f ((x (eql 2)) (y (eql 3))) (+ x y))
  (defmethod  f ((x (eql 2)) (y (eql 5))) (+ x y))
  (defmethod  f ((x (eql 2)) (y (eql 7))) (+ x y))
  (defmethod  f ((x (eql 3)) (y (eql 3))) (+ x y))
  (defmethod  f ((x (eql 3)) (y (eql 5))) (+ x y))
  (defmethod  f ((x (eql 3)) (y (eql 7))) (+ x y))
  (defmethod  f ((x (eql 5)) (y (eql 3))) (+ x y))
  (defmethod  f ((x (eql 5)) (y (eql 5))) (+ x y))
  (defmethod  f ((x (eql 5)) (y (eql 7))) (+ x y))

To keep the Lisp code under control you must factor it. You can automate the
factoring of the Lisp code, at which point you are Greenspunning a pattern
match compiler.
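
(To be concrete, that automation starts as just a few lines of Lisp.
A minimal sketch -- the macro name DEFMETHODS-EQL is invented here,
and it deliberately captures the parameter names X and Y so the body
can refer to them:

  (defmacro defmethods-eql (name xvals yvals &body body)
    ;; Expand one "or-pattern" over EQL specializers into the full
    ;; cross-product of DEFMETHODs.
    `(progn
       ,@(loop for xv in xvals append
               (loop for yv in yvals collect
                     `(defmethod ,name ((x (eql ,xv)) (y (eql ,yv)))
                        ,@body)))))

so that (defmethods-eql f (2 3 5) (3 5 7) (+ x y)) expands into
exactly the nine DEFMETHODs above -- which is, of course, the seed of
the pattern match compiler I am describing.)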

> The generic function is not visible to the programmer...

The Lisp programmer has to write and maintain more code.

> What you don't seem to understand is that from the source:
> 
>   (defgeneric f (x y))
>   (defmethod  f ((x (eql 0)) y) y)
>   (defmethod  f (x (y (eql 0))) x)
> 
> the compiler will silently generate something like:
> 
>   (defun f-1 (x y) y)
>   (defun f-2 (x y) x)
> 
>   (defun f (x y)
>     (cond ((eql x 0) (f-1 x y))
>           ((eql y 0) (f-2 x y))
>           (t (error "no matching method"))
>     ))
> 
> and then, depending on optimization levels, f-1 and f-2 may be
> in-lined and the whole thing reduced to a single expression with no
> nested calls.

You are talking about nested function calls. I was talking about nested
or-patterns. They are completely different.

> The major difference from OCaml's pattern matching is the need to
> explicitly name the pattern variables because they are simultaneously
> formal arguments to the methods.

There are more important differences than that:

. OCaml uses a single pattern to represent many permutations but Lisp's
generic methods cannot express that so the Lisp programmer has to manage a
combinatoric code explosion by hand.

. The target expression appears only once with or-patterns but multiple
times with generic methods in the Lisp. The Lisp programmer must factor that
by hand as well.

. The pattern match compiler performs decision tree optimizations on
dispatch over nested patterns. Lisp's generic methods can only represent a
flat dispatch table so they do not facilitate this optimization and are
much slower as a consequence.

. Pattern matching over algebraic datatypes can automatically warn of
inexhaustive or redundant match cases.

>>Use more nesting (which is very common when pattern matching is available)
>>and the gap between OCaml and Lisp widens very quickly.
> 
> Just nested calls to other generic functions.

Exactly. That complexity is not present with the pattern matching.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Pascal Costanza
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <6c9lvcF3crf8vU1@mid.individual.net>
Jon Harrop wrote:

> For example, this single pattern in OCaml:
> 
>   | (2|3|5 as x), (3|5|7 as y) -> x + y
> 
> has the obvious translation into Lisp (following Pascal's lead) as:
> 
>   (defmethod  f ((x (eql 2)) (y (eql 3))) (+ x y))
>   (defmethod  f ((x (eql 2)) (y (eql 5))) (+ x y))
>   (defmethod  f ((x (eql 2)) (y (eql 7))) (+ x y))
>   (defmethod  f ((x (eql 3)) (y (eql 3))) (+ x y))
>   (defmethod  f ((x (eql 3)) (y (eql 5))) (+ x y))
>   (defmethod  f ((x (eql 3)) (y (eql 7))) (+ x y))
>   (defmethod  f ((x (eql 5)) (y (eql 3))) (+ x y))
>   (defmethod  f ((x (eql 5)) (y (eql 5))) (+ x y))
>   (defmethod  f ((x (eql 5)) (y (eql 7))) (+ x y))

LOL

From http://www.research.att.com/~bs/bs_faq.html#compare :

"Several reviewers asked me to compare C++ to other languages. This I 
have decided against doing. Thereby, I have reaffirmed a long-standing 
and strongly held view: Language comparisons are rarely meaningful and 
even less often fair. A good comparison of major programming languages 
requires more effort than most people are willing to spend, experience 
in a wide range of application areas, a rigid maintenance of a detached 
and impartial point of view, and a sense of fairness. I do not have the 
time, and as the designer of C++, my impartiality would never be fully 
credible.

I also worry about a phenomenon I have repeatedly observed in honest 
attempts at language comparisons. The authors try hard to be impartial, 
but are hopelessly biased by focusing on a single application, a single 
style of programming, or a single culture among programmers. Worse, when 
one language is significantly better known than others, a subtle shift 
in perspective occurs: Flaws in the well-known language are deemed minor 
and simple workarounds are presented, whereas similar flaws in other 
languages are deemed fundamental. Often, the workarounds commonly used 
in the less-well-known languages are simply unknown to the people doing 
the comparison or deemed unsatisfactory because they would be unworkable 
in the more familiar language.

Similarly, information about the well-known language tends to be 
completely up-to-date, whereas for the less-known language, the authors 
rely on several-year-old information. For languages that are worth 
comparing, a comparison of language X as defined three years ago vs. 
language Y as it appears in the latest experimental implementation is 
neither fair nor informative. Thus, I restrict my comments about 
languages other than C++ to generalities and to very specific comments."


Enough said.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <7YqdnWzuCp6KL8LVnZ2dneKdnZydnZ2d@plusnet>
Pascal Costanza wrote:
> ...
> Enough said.

That was a very long-winded way of saying "ignorance is bliss".

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Pascal J. Bourguignon
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <7cod5snr26.fsf@pbourguignon.anevia.com>
Pascal Costanza <··@p-cos.net> writes:

> Jon Harrop wrote:
>
>> For example, this single pattern in OCaml:
>>   | (2|3|5 as x), (3|5|7 as y) -> x + y
>> has the obvious translation into Lisp (following Pascal's lead) as:
>>   (defmethod  f ((x (eql 2)) (y (eql 3))) (+ x y))
>>   (defmethod  f ((x (eql 2)) (y (eql 5))) (+ x y))
>>   (defmethod  f ((x (eql 2)) (y (eql 7))) (+ x y))
>>   (defmethod  f ((x (eql 3)) (y (eql 3))) (+ x y))
>>   (defmethod  f ((x (eql 3)) (y (eql 5))) (+ x y))
>>   (defmethod  f ((x (eql 3)) (y (eql 7))) (+ x y))
>>   (defmethod  f ((x (eql 5)) (y (eql 3))) (+ x y))
>>   (defmethod  f ((x (eql 5)) (y (eql 5))) (+ x y))
>>   (defmethod  f ((x (eql 5)) (y (eql 7))) (+ x y))
>
> LOL

Well ok.  Indeed,  lisp has no native pattern matcher.  But it has
something better: smooth and easy meta-programming.  


> From http://www.research.att.com/~bs/bs_faq.html#compare :
>
> "Several reviewers asked me to compare C++ to other languages. This I
> have decided against doing. Thereby, I have reaffirmed a long-standing
> and strongly held view: Language comparisons are rarely meaningful and
> even less often fair. A good comparison of major programming languages
> requires more effort than most people are willing to spend, experience
> in a wide range of application areas, a rigid maintenance of a
> detached and impartial point of view, and a sense of fairness. I do
> not have the time, and as the designer of C++, my impartiality would
> never be fully credible.
>
> I also worry about a phenomenon I have repeatedly observed in honest
> attempts at language comparisons. The authors try hard to be
> impartial, but are hopelessly biased by focusing on a single
> application, a single style of programming, or a single culture among
> programmers. Worse, when one language is significantly better known
> than others, a subtle shift in perspective occurs: Flaws in the
> well-known language are deemed minor and simple workarounds are
> presented, whereas similar flaws in other languages are deemed
> fundamental. Often, the workarounds commonly used in the
> less-well-known languages are simply unknown to the people doing the
> comparison or deemed unsatisfactory because they would be unworkable
> in the more familiar language.
>
> Similarly, information about the well-known language tends to be
> completely up-to-date, whereas for the less-known language, the
> authors rely on several-year-old information. For languages that are
> worth comparing, a comparison of language X as defined three years ago
> vs. language Y as it appears in the latest experimental implementation
> is neither fair nor informative. Thus, I restrict my comments about
> languages other than C++ to generalities and to very specific
> comments."

Well not OK.  There are languages that are worse than others, and C++
is amongst them.  Of course, there's no need to discuss for hours whether
C++ is worse than brainfuck¹ or vice versa, but any programmer can
make up his own classification easily.


> Enough said.



¹) said after a day fighting with templates to implement something
   done in two lines of lisp.

-- 
__Pascal Bourguignon__
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <tOudnVhwjIbdesLVnZ2dnUVZ8vmdnZ2d@posted.plusnet>
Pascal J. Bourguignon wrote:
> Well ok.  Indeed,  lisp has no native pattern matcher.  But it has
> something better: smooth and easy meta-programming.

Mathematica has both pattern matching and easy metaprogramming. Could a next
generation Lisp also bundle both?

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Kenny
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <485ffb8f$0$11642$607ed4bc@cv.net>
Jon Harrop wrote:
> Pascal J. Bourguignon wrote:
>> Well ok.  Indeed,  lisp has no native pattern matcher.  But it has
>> something better: smooth and easy meta-programming.
> 
> Mathematica has both pattern matching and easy metaprogramming. Could a next
> generation Lisp also bundle both?
> 

What are your thoughts on Qi?

kt
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <0aydnaiGAcBiiP3V4p2dnAA@posted.plusnet>
Kenny wrote:
> Jon Harrop wrote:
>> Pascal J. Bourguignon wrote:
>>> Well ok.  Indeed,  lisp has no native pattern matcher.  But it has
>>> something better: smooth and easy meta-programming.
>> 
>> Mathematica has both pattern matching and easy metaprogramming. Could a
>> next generation Lisp also bundle both?
> 
> What are your thoughts on Qi?

From a purely technical perspective, Qi looks like an interesting
development that tries to combine the best of the statically typed FPLs
with the best of the dynamic ones.

From an overall perspective, I think Qi was always doomed to failure due to
market issues. Specifically, it is dwarfed by statically-typed FPLs like
OCaml and F# on one side and dynamic languages like Lisp and Erlang on the
other side. All of these competitors have far more tutorial information out
there, far better tools, far more libraries and are far more mature. Qi
would have to offer something amazing just to become visible but it
doesn't.

Right now, I think multicore computing is widely recognised to be of
critical importance. So languages that can stake a claim on parallelism are
seeing a lot of limelight. Erlang is an obvious example of a programming
language seeing a lot of hype because of this (although it is almost
entirely unfounded).

Despite the early warnings about multicore computing, the open source world
has managed to start off wrong-footed in this context: with a complete lack
of multicore-capable tools except for the JVM which essentially only has
sucky languages available for it because it lacks basic common-language
features like tail calls.

I think Microsoft have positioned themselves extremely well in this respect,
by making the .NET platform multicore capable and multi programming
language capable and then adding their Task Parallel Library to make
multicore computing easy through the use of higher order functions (!).
This is one of the main reasons why we are now placing our bets on F# as a
company. If the open source world created anything competitive then we
would certainly diversify but, to be honest, I think we've reached a point
in history where foundations like decent language implementations cannot
feasibly be written by the open source community.

So my advice to budding language developers trying to develop a useful
language is to target the JVM instead of building upon Lisp. Unfortunately,
that will not really be feasible for functional languages until Sun finally
get around to implementing tail calls. The second best option would be to
build upon LLVM and write your own concurrent GC but I believe that is
prohibitively difficult. If you're happy to restrict yourself to Windows
then target the CLR but, of course, you'll be going head to head with F#.

Now that I think about it, the pattern matching and the metaprogramming in
Mathematica are very much secondary to the graphical development
environment. So perhaps there is no merit in trying to build a language
combining such features anyway.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Kenny
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <48604cd0$0$11641$607ed4bc@cv.net>
Jon Harrop wrote:
> Kenny wrote:
>> Jon Harrop wrote:
>>> Pascal J. Bourguignon wrote:
>>>> Well ok.  Indeed,  lisp has no native pattern matcher.  But it has
>>>> something better: smooth and easy meta-programming.
>>> Mathematica has both pattern matching and easy metaprogramming. Could a
>>> next generation Lisp also bundle both?
>> What are your thoughts on Qi?
> 
> From a purely technical perspective, Qi looks like an interesting
> development that tries to combine the best of the statically typed FPLs
> with the best of the dynamic ones.
> 
> From an overall perspective, I think Qi was always doomed to failure due to
> market issues. Specifically, it is dwarfed by statically-typed FPLs like
> OCaml and F# on one side and dynamic languages like Lisp and Erlang on the
> other side. All of these competitors have far more tutorial information out
> there, far better tools, far more libraries and are far more mature. Qi
> would have to offer something amazing just to become visible but it
> doesn't.

Oh. I thought it did the pattern thing. And that that was 
vital/crucial/amazing. Confused.

> 
> Right now, I think multicore computing is widely recognised to be of
> critical importance. So languages that can stake a claim on parallelism are
> seeing a lot of limelight. Erlang is an obvious example of a programming
> language seeing a lot of hype because of this (although it is almost
> entirely unfounded).
> 
> Despite the early warnings about multicore computing, the open source world
> has managed to start off wrong-footed in this context: with a complete lack
> of multicore-capable tools except for the JVM which essentially only has
> sucky languages available for it because it lacks basic common-language
> features like tail calls.
> 
> I think Microsoft have positioned themselves extremely well in this respect,
> by making the .NET platform multicore capable and multi programming
> language capable and then adding their Task Parallel Library to make
> multicore computing easy through the use of higher order functions (!).
> This is one of the main reasons why we are now placing our bets on F# as a
> company.

Isn't .net proprietary? I do not think any serious technologist is 
interested in running only on Windows.

It might be a good decision for a business, but not for one in an
oddball fringe area like pattern-matching static FPLs, in which case
one desperately needs to be portable just to reach the only market you
have: nerds.

I killed over a year porting my Algebra software from the Mac to the PC 
only to discover that (back then) PC users only ran software they could 
"borrow" from work. The PC (95% market share) became 20% of my sales. 
Not worth that year when I could have rolled out a second title.

You may want to move your chips off 13 onto red or black.

kt
From: George Neuner
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <il6264pvr15eh78o3he6viuq9k2d6a7nc8@4ax.com>
On Mon, 23 Jun 2008 21:24:32 -0400, Kenny <·········@gmail.com> wrote:

>Isn't .net proprietary? I do not think any serious technologist is 
>interested in running only on Windows.

.NET is an open design.  The Mono project is devoted to bringing .NET
to other platforms.

http://www.mono-project.com/Main_Page


I think Jon is right that .NET is a better platform than JVM.

George
--
for email reply remove "/" from address
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <A9udnV7CBt46QP3VnZ2dneKdnZydnZ2d@posted.plusnet>
Kenny wrote:
> Isn't .net proprietary?

Yes.

> I do not think any serious technologist is interested in running only on
> Windows. 

Two years ago I would have agreed. One year ago, we started to ship our
first Windows-only products. Now, our Windows-only products already account
for 60% of our profits from sales.

> It might be a good decision for a business, but not for one in an oddball
> fringe area like pattern-matching static FPLs, in which case one
> desperately needs to be portable just to reach the only market you have:
> nerds.

Migrating from other platforms to Windows last year was the best thing we
ever did as far as profits are concerned.

> I killed over a year porting my Algebra software from the Mac to the PC
> only to discover that (back then) PC users only ran software they could
> "borrow" from work. The PC (95% market share) became 20% of my sales.
> Not worth that year when I could have rolled out a second title.
>
> You may want to move your chips off 13 onto red or black.

The odds are now stacked overwhelmingly in favor of the Windows platform.
Not only does Windows have the lions share of the target market but it has
the richest slice of the target market (poor people use Linux). Combine
that with the fact that Microsoft's developer tools (.NET and F#) now blow
everything else away as far as programmer productivity is concerned and you
have an incredibly strong argument for focussing on Windows-only products.

For example, the new 3D support in F# for Visualization took me only four
days to implement in F# using Windows Presentation Foundation and it runs
reliably on a huge number of computers:

  http://www.ffconsultancy.com/products/fsharp_for_visualization/demo2.html

Productivity like that lets us compete against much bigger players (e.g.
Mathematica) precisely because they are forgoing the best tools in order to
support other platforms even though those platforms make up a fraction of
their revenue.

As far as business is concerned, I am convinced that migrating to Windows is
the right thing to do.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Kenny
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <4860e976$0$11632$607ed4bc@cv.net>
Jon Harrop wrote:
> Kenny wrote:
>> Isn't .net proprietary?
> 
> Yes.
> 
>> I do not think any serious technologist is interested in running only on
>> Windows. 
> 
> Two years ago I would have agreed. One year ago, we started to ship our
> first Windows-only products. Now, our Windows-only products already account
> for 60% of our profits from sales.
> 
>> It might be a good decision for a business, but not for one in an oddball
>> fringe area like pattern-matching static FPLs, in which case one
>> desperately needs to be portable just to reach the only market you have:
>> nerds.
> 
> Migrating from other platforms to Windows last year was the best thing we
> ever did as far as profits are concerned.
> 
>> I killed over a year porting my Algebra software from the Mac to the PC
>> only to discover that (back then) PC users only ran software they could
>> "borrow" from work. The PC (95% market share) became 20% of my sales.
>> Not worth that year when I could have rolled out a second title.
>>
>> You may want to move your chips off 13 onto red or black.
> 
> The odds are now stacked overwhelmingly in favor of the Windows platform.

Language like that (which you use a lot) makes you sound insecure in 
your argument. You throw in something like "overwhelmingly" and end up 
with the opposite effect: a more dubious reader.

Meanwhile, the same kind of popularity argument works in favor of C#, 
not F#. This gets back to my point that /your/ fringe market is probably 
using Linux or OS X, not .NET.

> Not only does Windows have the lion's share of the target market but it has
> the richest slice of the target market (poor people use Linux).

The burden remains on justifying lock-in to one platform if no lock-in 
is necessary.

> Combine
> that with the fact that Microsoft's developer tools (.NET and F#) now blow
> everything else away as far as programmer productivity is concerned and you
> have an incredibly strong argument for focussing on Windows-only products.
> 
> For example, the new 3D support in F# for Visualization took me only four
> days to implement in F# using Windows Presentation Foundation...

Four days is a lot. I can interface to any C/C++ library in four days, 
and I do not even have pattern matching (or .net interlanguage 
compatibility) to leverage.

Meanwhile, this whole ooohhh look at all the libraries thing reminds me 
of Java or even Pythonistas. Libraries off the shelf are great, but if 
one is not just doing the moral equivalent of scripting then the arrow 
shifts the other way. OTS libs save me a linear, fixed cost of building 
an FFI layer to another library, but not using Lisp is a non-linear 
quagmire once the code gets interesting.

> and it runs
> reliably on a huge number of computers:

See that word "huge"? We can almost see you sweating.

> 
>   http://www.ffconsultancy.com/products/fsharp_for_visualization/demo2.html
> 

If it only took six lines I do not see why I need F# -- it is not doing 
anything, it is just playing a scripting role. That page needs to show 
off F#, not the 3D library from Microsoft. As it is you are just selling C#.

> Productivity like that lets us compete against much bigger players (e.g.
> Mathematica) precisely because they are forgoing the best tools in order to
> support other platforms even though those platforms make up a fraction of
> their revenue.
> 
> As far as business is concerned, I am convinced that migrating to Windows is
> the right thing to do.
> 

But here you are on a Lisp forum spinning your wheels before an 
unpromising audience and going out of your way to alienate them vs. 
building interest in your wares. Happy successful people do not act that 
way.

hth,kenny
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <M7SdnWftA6nMaf3VnZ2dnUVZ8uednZ2d@posted.plusnet>
Kenny wrote:
> Meanwhile, the same kind of popularity argument works in favor of C#,
> not F#.

My argument was based on profits and not popularity.

> This gets back to my point that /your/ fringe market is probably 
> using Linux or OS X, not .NET.

Then why are the sales of our .NET products so much higher?

>> Not only does Windows have the lion's share of the target market but it
>> has the richest slice of the target market (poor people use Linux).
> 
> The burden remains on justifying lock-in to one platform if no lock-in
> is necessary.

If there were anything comparable to F# and .NET outside Windows there might
be a choice but there isn't.

>> For example, the new 3D support in F# for Visualization took me only four
>> days to implement in F# using Windows Presentation Foundation...
> 
> Four days is a lot. I can interface to any C/C++ library in four days,
> and I do not even have pattern matching (or .net interlanguage
> compatibility) to leverage.

Compare F# for Visualization with the plotting routines in Maxima, for
example.

> Meanwhile, this whole ooohhh look at all the libraries thing reminds me
> of Java or even Pythonistas. Libraries off the shelf are great, but if
> one is not just doing the moral equivalent of scripting then the arrow
> shifts the other way. OTS libs save me a linear, fixed cost of building
> an FFI layer to another library, but not using Lisp is a non-linear
> quagmire once the code gets interesting.

You're explaining why Lisp is a success despite the fact that it continues
to be a resounding failure. While you're fiddling with FFI layers, .NET
programmers are calling libraries in other languages directly thanks to a
common language run-time. While you're Greenspunning basic GUI
libraries, .NET users are building upon integrated 2D and 3D hardware
accelerated vector graphics that is known to be extremely reliable because
it is now baked into the OS.

>> and it runs reliably on a huge number of computers:
> 
> See that word "huge"? We can almost see you sweating.

I was sweating. We canned an entire OCaml-based product line after months of
wasted effort because it did not run reliably on customer's machines.
Reliability is (ironically) probably the single most important advantage of
Windows for us. F# for Visualization runs reliably because Microsoft put
the core functionality in their OSs and tested it for us.

>> http://www.ffconsultancy.com/products/fsharp_for_visualization/demo2.html
> 
> If it only took six lines I do not see why I need F# -- it is not doing 
> anything, it is just playing a scripting role.

Exactly, yes. Users write tiny amounts of code to get the job done.

> That page needs to show off F#, not the 3D library from Microsoft.

That page shows off the 3D library from *us*.

>> Productivity like that lets us compete against much bigger players (e.g.
>> Mathematica) precisely because they are forgoing the best tools in order
>> to support other platforms even though those platforms make up a fraction
>> of their revenue.
>> 
>> As far as business is concerned, I am convinced that migrating to Windows
>> is the right thing to do.
> 
> But here you are on a Lisp forum spinning your wheels before an
> unpromising audience and going out of your way to alienate them vs.
> building interest in your wares. Happy successful people do not act that
> way.

Does Lisp not run on Windows? If Lisp had a decent .NET backend it might be
worth considering as a tool for writing commercial software. Until then, my
bets are on F#.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: John Thingstad
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <op.uc89pn0yut4oq5@pandora.alfanett.no>
On Tue, 24 Jun 2008 15:16:11 +0200, Jon Harrop
<···@ffconsultancy.com> wrote:

>
> You're explaining why Lisp is a success despite the fact that it
> continues to be a resounding failure. While you're fiddling with FFI
> layers, .NET programmers are calling libraries in other languages
> directly thanks to a common language run-time. While you're
> Greenspunning basic GUI libraries, .NET users are building upon
> integrated 2D and 3D hardware accelerated vector graphics that is
> known to be extremely reliable because it is now baked into the OS.
>

The .NET library is accessible from any language including Lisp.
(RDNZL library)
In fact most .NET users program in C++ ...

--------------
John Thingstad
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <6YydnZQuiYJUlPzVnZ2dnUVZ8q_inZ2d@posted.plusnet>
John Thingstad wrote:
> The .NET library is accessible from any language including Lisp.

With a catastrophic loss of productivity and reliability. That is useless if
you want to ship product.

> (RDNZL library)

That is a non-native beta release library lashed together by Edi that has no
users.

> In fact most .NET users program in C++ ...

I don't believe that for a second.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: George Neuner
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <r07264l9qlu59t6vjvjp0lltqtq04e8mjc@4ax.com>
On Tue, 24 Jun 2008 15:47:50 +0100, Jon Harrop <···@ffconsultancy.com>
wrote:

>John Thingstad wrote:
>
>> In fact most .NET users program in C++ ...
>
>I don't believe that for a second.

Then you better wake up and smell the coffee.  

Microsoft's own surveys have shown that the majority of .NET
developers are using managed C++ to add new features to legacy
products.  After C++, the next most popular language is VB.NET and
then C#.  F# doesn't have enough users to even register.

Companies with existing code bases can't simply change tool chains and
platforms whenever something new and shiny comes along.  While there
may be ROI justification to convert to something better for new
projects, there is rarely justification to change for maintenance of
legacy code.

George
--
for email reply remove "/" from address
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <8d22b535-cc0f-4eae-92b3-0426cd6279d7@26g2000hsk.googlegroups.com>
On 24 Jun, 17:29, George Neuner <·········@/comcast.net> wrote:
> On Tue, 24 Jun 2008 15:47:50 +0100, Jon Harrop <····@ffconsultancy.com>
> wrote:
> >John Thingstad wrote:
> >> In fact most .NET users program in C++ ...
>
> >I don't believe that for a second.
>
> Then you better wake up and smell the coffee.  
>
> Microsoft's own surveys have shown that the majority of .NET
> developers are using managed C++ to add new features to legacy
> products. After C++, the next most popular language is VB.NET and
> then C#.  F# doesn't have enough users to even register.
>
> Companies with existing code bases can't simply change tool chains and
> platforms whenever something new and shiny comes along.  While there
> may be ROI justification to convert to something better for new
> projects, there is rarely justification to change for maintenance of
> legacy code.

Google Trends indicates that world interest in C# overtook that of C++
(both on and off .NET) in Q2 2007:

http://www.google.com/trends?q=c%2B%2B%2Cc%23

According to O'Reilly, C# books started outselling C++ books in 2007:

http://radar.oreilly.com/archives/2008/03/state-of-the-computer-book-mar-22.html

So what gives you the impression that .NET has more C++ users than C#
users?

Cheers,
Jon.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <rem-2008jul05-002@yahoo.com>
> Date: Tue, 24 Jun 2008 14:16:11 +0100
Why this response is so belated:
  <http://groups.google.com/group/misc.misc/msg/cea714440e591dd2>
= <······················@yahoo.com>
> From: Jon Harrop <····@ffconsultancy.com>
> Users write tiny amounts of code to get the job done.

That works only if somebody has already written the essential parts
of the program and all the "application programmer" needs to do is
write a script to feed it parameters. Apparently you never do any
programming of new algorithms.

It's been said that "software engineering" simply means piecing
together major applications that somebody else already wrote, while
"programming" involves actually writing new algorithms and
associated applications from scratch using only general tools for
various datatypes and generally-useful algorithms. If that's true,
then "software engineering" is basically trivium compared to
"programming". All you care about is "software engineering",
whereas I do R&D to develop new algorithms that nobody else even
thought of before.

By the way: Here's a tool I wrote more than ten years ago:
SEGMAT (Segment Match)
Given two strings, the "correct" string, and another string
(usually something somebody typed in manually), find the longest
matching substring, and among the pieces that remain the next
longest matching substring, etc. until all nonoverlapping matching
strings of two or more characters have been found.
Now please tell me which of your favorite programming languages
have that complete algorithm available in the library, and which
would require a programmer to write it (as I did in Lisp).
For the languages that don't have it built-in, how long would it
take you to write it, and how efficient would it be?
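
For concreteness, here's a rough Common Lisp sketch of the idea (this
is *not* my actual SEGMAT code, just an illustration; it assumes
matches must keep their left-to-right order, and it uses a naive
O(n^3) search for the longest match):

;; Illustrative sketch only, not the real SEGMAT.
;; Find the longest common substring of A and B.
;; Returns (values length start-in-a start-in-b).
(defun longest-common-substring (a b)
  (let ((best 0) (best-i 0) (best-j 0))
    (dotimes (i (length a))
      (dotimes (j (length b))
        (let ((k 0))
          (loop while (and (< (+ i k) (length a))
                           (< (+ j k) (length b))
                           (char= (char a (+ i k)) (char b (+ j k))))
                do (incf k))
          (when (> k best)
            (setf best k best-i i best-j j)))))
    (values best best-i best-j)))

;; Collect all nonoverlapping common substrings of length >= 2:
;; take the longest match, then recurse on the pieces to its left
;; and on the pieces to its right.
(defun segmat (a b)
  (multiple-value-bind (len i j) (longest-common-substring a b)
    (if (< len 2)
        '()
        (cons (subseq a i (+ i len))
              (append (segmat (subseq a 0 i) (subseq b 0 j))
                      (segmat (subseq a (+ i len)) (subseq b (+ j len))))))))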

> If Lisp had a decent .NET backend it might be worth considering
> as a tool for writing commercial software.

What makes you think .NET is actually good in the first place?
From: Mark Tarver
Subject: some misconceptions
Date: 
Message-ID: <e567f86d-c97a-44c7-b59a-ce6d379dfbf1@z66g2000hsc.googlegroups.com>
From Jon Harrop

QUOTE
From an overall perspective, I think Qi was always doomed to failure
due to market issues. Specifically, it is dwarfed by statically-typed
FPLs like OCaml and F# on one side and dynamic languages like Lisp and
Erlang on the other side. All of these competitors have far more
tutorial information out there, far better tools, far more libraries
and are far more mature. Qi would have to offer something amazing just
to become visible but it doesn't.
UNQUOTE

Some misconceptions here.

1.  The range of available libraries for Qi is exactly the same as
that for CL i.e. vast.  Qi is designed and implemented for CL.

2.  The whole Qi system is documented in > 300 pages including
correctness proofs.  There's no shortage of material to consult.  If
you get stuck there are 112 members of the Qi group to call upon.

3.  Qi since Jan 2007 has had over 2000 downloads, and Qilang has
about 2/3 of the web traffic that comp.lang.ml is getting and 3-4X
that of the equivalent F#.group.

Of course fast pattern-matching and static typing are in there
already.

'nuff said from me

Mark
From: Jon Harrop
Subject: Re: some misconceptions
Date: 
Message-ID: <68fbae45-d476-421e-baa2-e4d061b6344c@f36g2000hsa.googlegroups.com>
On 25 Jun, 09:12, Mark Tarver <··········@ukonline.co.uk> wrote:
> From Jon Harrop
> QUOTE
> From an overall perspective, I think Qi was always doomed to failure
> due to market issues. Specifically, it is dwarfed by statically-typed
> FPLs like OCaml and F# on one side and dynamic languages like Lisp and
> Erlang on the other side. All of these competitors have far more
> tutorial information out there, far better tools, far more libraries
> and are far more mature. Qi would have to offer something amazing just
> to become visible but it doesn't.
> UNQUOTE
>
> Some misconceptions here.
>
> 1.  The range of available libraries for Qi is exactly the same as
> that for CL i.e. vast.  Qi is designed and implemented for CL.

All of these languages can theoretically call libraries written in all
other languages. In practice, there are far more F# libraries written
in F# than there are Qi libraries written in Qi and there are far
more .NET libraries callable from F# than Common Lisp libraries
callable from Qi.

> 2.  The whole Qi system is documented in > 300 pages including
> correctness proofs.  There's no shortage of material to consult.

All of the languages I cited have orders of magnitude more material
available about them. The Qi documentation is a fraction the length of
my latest F# book, let alone the other books (Foundations of F# and
Expert F#).

> If you get stuck there are 112 members of the Qi group to call upon.

Then the F# Hub forums already have ~50x more members than Qilang:

http://cs.hubfs.net/forums/default.aspx

> 3.  Qi since Jan 2007 has had over 2000 downloads,

And Qi is free. Our F#.NET Journal is a commercial product that
already sees 2,000 downloads every month.

> and Qilang has about 2/3 of the web traffic that comp.lang.ml

Qilang has only 1/8th the traffic of one of the OCaml groups and
1/40th the traffic of one of the Haskell groups:

Qilang:     24
fa.caml:    203
fa.haskell: 1083

http://groups.google.com/group/Qilang/about
http://groups.google.com/group/fa.caml/about
http://groups.google.com/group/fa.haskell/about

> is getting and 3-4X that of the equivalent F#.group.

No. The F# Hub has 30x the activity of Qilang.

> Of course fast pattern-matching and static typing are in there
> already.

According to your own benchmark results, Qi is almost 2x slower than
unoptimized OCaml:

http://www.lambdassociates.org/studies/study10.htm

Moreover, by building upon Lisp instead of a modern framework you
forego the benefits of a modern environment, such as the ability to
take advantage of multicore computers.

I don't want to diss your work, Mark, but the idea that Qi is even in
the same ballpark as any of those other functional languages is just
insane, let alone Microsoft's work. My point was that unpopularity
does not reflect a technical failing.

Cheers,
Jon.
From: Pascal J. Bourguignon
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <7c3an3nver.fsf@pbourguignon.anevia.com>
Jon Harrop <···@ffconsultancy.com> writes:

> Pascal J. Bourguignon wrote:
>> Well ok.  Indeed,  lisp has no native pattern matcher.  But it has
>> something better: smooth and easy meta-programming.
>
> Mathematica has both pattern matching and easy metaprogramming. Could a next
> generation Lisp also bundle both?

The question is still whether this is something that needs to be done
at the implementation level.

Most of lisp just doesn't need specific implementation support, but
can be implemented directly over a core lisp.  (Without going down to
the theoretical pure lambda calculus, you could implement CL just with
its 17 special operators, and all the rest implemented above them).

So the point is that if you, as a lisp programmer, feel the need for a
pattern matcher, then you can implement it yourself in lisp as a
library, and it will be as integrated to the language as any other CL
operator, such as CLOS or LOOP for example.
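
To show how small the core of such a matcher is, here is a
deliberately tiny sketch (ad hoc code, not one of the existing
libraries): symbols whose names start with ? are pattern variables,
any other atom must match literally, NIL means success with no
bindings, and :FAIL means failure.

;; Sketch only: an ad hoc matcher, not one of the existing libraries.
(defun match (pattern datum &optional (bindings '()))
  (cond ((eq bindings :fail) :fail)
        ;; A ?-variable: bind it, or check a previous binding.
        ((and (symbolp pattern)
              (plusp (length (symbol-name pattern)))
              (char= (char (symbol-name pattern) 0) #\?))
         (let ((binding (assoc pattern bindings)))
           (cond ((null binding) (acons pattern datum bindings))
                 ((equal (cdr binding) datum) bindings)
                 (t :fail))))
        ;; A literal atom must match under EQL.
        ((atom pattern) (if (eql pattern datum) bindings :fail))
        ;; A cons pattern: match the car, then the cdr.
        ((consp datum)
         (match (cdr pattern) (cdr datum)
                (match (car pattern) (car datum) bindings)))
        (t :fail)))

;; (match '(+ ?x ?y) '(+ 1 2)) => ((?Y . 2) (?X . 1))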

For a counter example, given that there is no (standard) primitive in
CL allowing us to access the underlying OS and FS, we cannot implement
(portably) lisp pathnames and I/O as a library over the OS and FS.
Instead, CL provides support for pathnames and I/O.

But I don't remember you citing any feature of pattern matching that
would need implementation level support.    On the contrary, we have
the example of several pattern matcher libraries.

I'm inciting you to write it, because you seem to be almost alone
wanting to have it.  The existing pattern matchers are used when they
are needed, but obviously most lisp programmers don't feel the need
for them in all circumstances.

-- 
__Pascal Bourguignon__
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <A9udnVnCBt53Q_3VnZ2dneKdnZydnZ2d@posted.plusnet>
Pascal J. Bourguignon wrote:
> Jon Harrop <···@ffconsultancy.com> writes:
>> Pascal J. Bourguignon wrote:
>>> Well ok.  Indeed,  lisp has no native pattern matcher.  But it has
>>> something better: smooth and easy meta-programming.
>>
>> Mathematica has both pattern matching and easy metaprogramming. Could a
>> next generation Lisp also bundle both?
> 
> The question is still whether this is something that needs to be done
> at the implementation level.
>
> Most of lisp just doesn't need specific implementation support, but
> can be implemented directly over a core lisp.  (Without going down to
> the theoretical pure lambda calculus, you could implement CL just with
> its 17 special operators, and all the rest implemented above them).

If we were talking about Scheme I'd agree but isn't the point of Lisp that
you take a bare bones metacircular evaluator and add lots of useful
features in a standard library such that users have a decent foundation to
build upon?

> So the point is that if you, as a lisp programmer, feel the need for a
> pattern matcher, then you can implement it yourself in lisp as a
> library, and it will be as integrated to the language as any other CL
> operator, such as CLOS or LOOP for example.
> 
> But I don't remember you citing any feature of pattern matching that
> would need implementation level support. On the contrary, we have 
> the example of several pattern matcher libraries.

That is precisely the problem. Users don't want a multitude of crap pattern
matching libraries and the ability to Greenspun their own. Users want a
single decent implementation.

> I'm inciting you to write it, because you seem to be almost alone
> wanting to have it.

Far more programmers choose languages with pattern matching (e.g. SML,
OCaml, Haskell, F#, Scala, Mathematica) over Lisp/Scheme.

> The existing pattern matchers are used when they are needed, but obviously
> most lisp programmers don't feel the need for them in all circumstances.

Most Java programmers don't feel the need for closures.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Bruce Nagel
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <slrng636n0.866.nagelbh@localhost.localdomain.com>
On 2008-06-24, Jon Harrop <···@ffconsultancy.com> wrote:
> Pascal J. Bourguignon wrote:

>> But I don't remember you citing any feature of pattern matching that
>> would need implementation level support. On the contrary, we have 
>> the example of several pattern matcher libraries.

> That is precisely the problem. Users don't want a multitude of crap pattern
> matching libraries and the ability to Greenspun their own. Users want a
> single decent implementation.

"Users want"... *Which* users?

>> I'm inciting you to write it, because you seem to be almost alone
>> wanting to have it.

> Far more programmers choose languages with pattern matching (e.g. SML,
> OCaml, Haskell, F#, Scala, Mathematica) over Lisp/Scheme.

I'm pretty sure they are welcome to them.

>> The existing pattern matchers are used when they are needed, but obviously
>> most lisp programmers don't feel the need for them in all circumstances.

> Most Java programmers don't feel the need for closures.

Boy, you're doing everything you can to make friends around here, aren't you?

Bruce
-- 
·······@freeshell.org    	
"Every normal man must be tempted, at times, to spit on his hands, 
hoist the black flag, and begin slitting throats." 
(H. L. Mencken)
From: Kenny
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <4861a6ec$0$7329$607ed4bc@cv.net>
Bruce Nagel wrote:
> On 2008-06-24, Jon Harrop <···@ffconsultancy.com> wrote:
>> Most Java programmers don't feel the need for closures.
> 
> Boy, you're doing everything you can to make friends around here, aren't you?
> 

Sadly, yes. Other NGs simply tell him to f*ck off and then ignore him. 
The denizens of c.l.l are so desperate for company we actually talk with 
him in spite of his best efforts to play the perfect idiot. Jon has 
found a home.

c.l.l -- send us your tired, your weak, your static-typed,...

kt
From: Marco Antoniotti
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <bea9bdbf-77f2-4dee-9bb9-36bd6e66ab67@x35g2000hsb.googlegroups.com>
On 24 Jun, 13:44, Jon Harrop <····@ffconsultancy.com> wrote:
> Pascal J. Bourguignon wrote:
> > Jon Harrop <····@ffconsultancy.com> writes:
> >> Pascal J. Bourguignon wrote:
> >>> Well ok.  Indeed,  lisp has no native pattern matcher.  But it has
> >>> something better: smooth and easy meta-programming.
>
> >> Mathematica has both pattern matching and easy metaprogramming. Could a
> >> next generation Lisp also bundle both?
>
> > The question is still whether this is something that needs to be done
> > at the implementation level.
>
> > Most of lisp just doesn't need specific implementation support, but
> > can be implemented directly over a core lisp.  (Without going down to
> > the theoretical pure lambda calculus, you could implement CL just with
> > its 17 special operators, and all the rest implemented above them).
>
> If we were talking about Scheme I'd agree but isn't the point of Lisp that
> you take a bare bones metacircular evaluator and add lots of useful
> features in a standard library such that users have a decent foundation to
> build upon?
>
> > So the point is that if you, as a lisp programmer, feel the need for a
> > pattern matcher, then you can implement it yourself in lisp as a
> > library, and it will be as integrated to the language as any other CL
> > operator, such as CLOS or LOOP for example.
>
> > But I don't remember you citing any feature of pattern matching that
> > would need implementation level support. On the contrary, we have
> > the example of several pattern matcher libraries.
>
> That is precisely the problem. Users don't want a multitude of crap pattern
> matching libraries and the ability to Greenspun their own. Users want a
> single decent implementation.

... as in CL-UNIFICATION (shameless plug: http://common-lisp.net/project/cl-unification)?
:)

And no.  It is *NOT* integrated with the compiler.  Don't start
confusing levels again.

The point is that, given CL-UNIFICATION, DEFINE-COMPILER-MACRO and the
MOP, you could in principle build fast pattern matching function
definitions and calls directly in CL.
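
A toy sketch of the mechanism (this has *nothing* to do with
CL-UNIFICATION's actual API; the names are made up): an interpretive
matcher whose compiler macro open-codes quoted constant patterns into
plain CONSP/EQL/NULL tests, so no pattern interpretation is left at
run time.

;; Toy sketch; not CL-UNIFICATION's API.
;; The general-purpose, interpretive entry point.
(defun pat-match (pattern datum)
  (cond ((null pattern) (null datum))
        ((atom pattern) (eql pattern datum))
        (t (and (consp datum)
                (pat-match (car pattern) (car datum))
                (pat-match (cdr pattern) (cdr datum))))))

;; When the pattern is a quoted constant, expand the call into
;; straight-line tests at compile time; otherwise decline.
(define-compiler-macro pat-match (&whole whole pattern datum)
  (if (and (consp pattern) (eq (first pattern) 'quote))
      (let ((d (gensym "DATUM")))
        (labels ((expand (pat place)
                   (cond ((null pat) `(null ,place))
                         ((atom pat) `(eql ',pat ,place))
                         (t `(and (consp ,place)
                                  ,(expand (car pat) `(car ,place))
                                  ,(expand (cdr pat) `(cdr ,place)))))))
          `(let ((,d ,datum)) ,(expand (second pattern) d))))
      whole))

So a call like (pat-match '(* 0 f) x) compiles into nested
CONSP/EQL/NULL tests with no interpretive overhead.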

Cheers
--
Marco
From: Vend
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <3032936b-b3d3-4c51-bd34-51b9fe6f1025@x41g2000hsb.googlegroups.com>
On 24 Jun, 09:29, ····@informatimago.com (Pascal J. Bourguignon)
wrote:
> Jon Harrop <····@ffconsultancy.com> writes:
> > Pascal J. Bourguignon wrote:
> >> Well ok.  Indeed,  lisp has no native pattern matcher.  But it has
> >> something better: smooth and easy meta-programming.
>
> > Mathematica has both pattern matching and easy metaprogramming. Could a next
> > generation Lisp also bundle both?
>
> The question is still whether this is something that needs to be done
> at the implementation level.
>
> Most of lisp just doesn't need specific implementation support, but
> can be implemented directly over a core lisp.  (Without going down to
> the theoretical pure lambda calculus, you could implement CL just with
> its 17 special operators, and all the rest implemented above them).
>
> So the point is that if you, as a lisp programmer, feel the need for a
> pattern matcher, then you can implement it yourself in lisp as a
> library, and it will be as integrated to the language as any other CL
> operator, such as CLOS or LOOP for example.
>
> For a counter example, given that there is no (standard) primitive in
> CL allowing us to access the underlying OS and FS, we cannot implement
> (portably) lisp pathnames and I/O as a library over the OS and FS.
> Instead, CL provides support for pathnames and I/O.
>
> But I don't remember you citing any feature of pattern matching that
> would need implementation level support.    On the contrary, we have
> the example of several pattern matcher libraries.
>
> I'm inciting you to write it, because you seem to be almost alone
> wanting to have it.  The existing pattern matchers are used when they
> are needed, but obviously most lisp programmers don't feel the need
> for them in all circumstances.

There is a pattern matching library in PLT Scheme.
I used it to implement the performance test example on the Qi website.
It works but it's quite slow (takes about 20 seconds to complete the
test on my 3 GHz Pentium 4):

(require (lib "match.ss"))

(define simplify
  (match-lambda
    ((op a b) (s op (simplify a) (simplify b)))
    (a a)))

(define s
  (match-lambda*
    (('+ (and m (? number?)) (and n (? number?))) (+ m n))
    (('+ 0 f) f)
    (('+ f 0) f)
    (('+ a ('+ b c)) (simplify `(+ (+ ,a ,b) ,c)))  ; rebuild the term, don't call +
    (('* (and m (? number?)) (and n (? number?))) (* m n))
    (('* 0 f) 0)
    (('* f 0) 0)
    (('* 1 f) f)
    (('* f 1) f)
    (('* a ('* b c)) (simplify `(* (* ,a ,b) ,c)))  ; rebuild the term, don't call *
    ((op a b) (list op a b))))

(define (tt)
  (time (test 10000000)))

(define (test n)
  (if (zero? n)
      0
      (begin (simplify '[* x [+ [+ [* 12 0] [+ 23 8]] y]])
             (test (- n 1)))))
From: George Neuner
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <d185645gqabql8anh47dq9b20uu23hnf3g@4ax.com>
On Wed, 25 Jun 2008 07:28:52 -0700 (PDT), Vend <······@virgilio.it>
wrote:

>
>There is a pattern matching library in PLT Scheme.
>I used it to implement the performance test example on the Qi website.
>It works but it's quite slow (takes about 20 seconds to complete the
>test on my 3 GHz Pentium 4):

It's slow for two reasons: 1) it's not being compiled, and 2) the
matching code is generated at first use and discarded when it goes out
of scope.  You need to require the compiler and cache the pattern
matching function if you intend to reuse it.  There is a function in
the library that will create the pattern without applying it.


>[quoted Scheme code snipped]

George
--
for email reply remove "/" from address
From: Vend
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <881029a6-09d4-42c4-abc1-6c04f457d019@e53g2000hsa.googlegroups.com>
On 25 Jun, 21:52, George Neuner <·········@/comcast.net> wrote:
> On Wed, 25 Jun 2008 07:28:52 -0700 (PDT), Vend <······@virgilio.it>
> wrote:
>
>
>
> >There is a pattern matching library in PLT Scheme.
> >I used it to implement the performance test example on the Qi website.
> >It works but it's quite slow (takes about 20 seconds to complete the
> >test on my 3 GHz Pentium 4):
>
> It's slow for two reasons: 1) it's not being compiled, and 2) the
> matching code is generated at first use and discarded when it goes out
> of scope.

How do I compile with DrScheme? I've seen a "make executable" command
in the menu, but I'm not sure whether it generates native code or
bytecode wrapped in an interpreter.

>  You need to require the compiler and cache the pattern
> matching function if you intend to reuse it.  There is a function in
> the library that will create the pattern without applying it.

I've used match-lambda and match-lambda*. Don't they generate
pattern-matching procedures?
I actually took a look at the macro expansion, and it generates a
control structure with lots of nested lets and ifs.

> > [quoted Scheme code snipped]
From: Vend
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <de3800b3-add4-4a37-92e6-6b7e27759458@x41g2000hsb.googlegroups.com>
On 25 Jun, 23:39, Vend <······@virgilio.it> wrote:
> On 25 Jun, 21:52, George Neuner <·········@/comcast.net> wrote:
>
> [earlier exchange snipped]

Update:
I've tried it with Erlang and it's faster (takes about 8 seconds).
Isn't Erlang interpreted?

-module(qiPerfTest).
-export([simplify/1, test/1, tt/0]).

simplify([Op, A, B]) -> s(Op, simplify(A), simplify(B));
simplify(A) -> A.

s('+', A, B) when is_number(A), is_number(B) -> A + B;
s('+', 0, F) -> F;
s('+', F, 0) -> F;
s('+', A, ['+', B, C]) -> simplify(['+', ['+', A, B], C]);
s('*', A, B) when is_number(A), is_number(B) -> A * B;
s('*', 0, _) -> 0;
s('*', _, 0) -> 0;
s('*', 1, F) -> F;
s('*', F, 1) -> F;
s('*', A, ['*', B, C]) -> simplify(['*', ['*', A, B], C]);
s(Op, A, B) -> [Op, A, B].

test(0) -> 0;
test(N) ->
	simplify(['*', x, ['+', ['+', ['*', 12, 0], ['+', 23, 8]], y]]),
	test(N - 1).

tt() ->
	timer:tc(qiPerfTest, test, [10000000]).
From: George Neuner
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <9hga64d3cd5nm6octnku6gq98lgedsh8o6@4ax.com>
On Wed, 25 Jun 2008 14:39:30 -0700 (PDT), Vend <······@virgilio.it>
wrote:

>[earlier exchange snipped]


Sorry, I goofed.  It has been pointed out that I confused the regex
lib with the match lib.  

match-lambda creates the function at macro expansion time and so the
result is always compiled.

George
--
for email reply remove "/" from address
From: Eli Barzilay
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <8bff678b-cfcd-4b07-bc38-5cd4fc7ec9c0@x41g2000hsb.googlegroups.com>
On Jun 25, 3:52 pm, George Neuner <·········@/comcast.net> wrote:
>
> >There is a pattern matching library in PLT Scheme.
> >I used it to implement the performance test example on the Qi website.
> >It works but it's quite slow (takes about 20 seconds to complete the
> >test on my 3 GHz Pentium 4):
>
> It's slow for two reasons: 1) it's not being compiled, and 2) the
> matching code is generated at first use and discarded when it goes out
> of scope.  You need to require the compiler and cache the pattern
> matching function if you intend to reuse it.  There is a function in
> the library that will create the pattern without applying it.

This paragraph is wrong: you make four statements that are all false.

--
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <4d5fe665-eaae-4629-ada6-c1b95a84cb87@a1g2000hsb.googlegroups.com>
On 25 Jun, 15:28, Vend <······@virgilio.it> wrote:
> There is a pattern matching library in PLT Scheme.
> ...

Yes, absolutely. Pattern matching appears to be far more widely used
in Scheme than it is in Common Lisp and, consequently, Scheme's
libraries are of a much higher quality.

Cheers,
Jon.
From: Marco Antoniotti
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <5fc8882f-6011-4601-ba63-ab945c0dbab5@l64g2000hse.googlegroups.com>
On Jun 28, 9:10 pm, Jon Harrop <····@ffconsultancy.com> wrote:
> On 25 Jun, 15:28, Vend <······@virgilio.it> wrote:
>
> > There is a pattern matching library in PLT Scheme.
> > ...
>
> Yes, absolutely. Pattern matching appears to be far more widely used
> in Scheme than it is in Common Lisp and, consequently, Scheme's
> libraries are of a much higher quality.

The presence of *a* pattern matching library in *one* (of the 42
godzillions more or less incompatible) Scheme implementations does not
warrant sweeping generalizations.  Since I don't know much about the
quality of "Scheme's libraries" (apart from the fact that most of them
are needed to implement what CL had in CLtL1 :) ) I won't comment on
them, but I fail to see the connection between presence of pattern
matching and the quality of CL libraries.  Or of any other language
for that matter.  Pattern matching helps, but C does not have
it.

Cheers
--
Marco
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <rem-2008jul05-003@yahoo.com>
> Date: Tue, 24 Jun 2008 09:29:00 +0200
Why this response is so belated:
  <http://groups.google.com/group/misc.misc/msg/cea714440e591dd2>
= <······················@yahoo.com>
> From: ····@informatimago.com (Pascal J. Bourguignon)
> > Mathematica has both pattern matching and easy metaprogramming.
> > Could a next generation Lisp also bundle both?
> The question is still whether this is something that needs to be done
> at the implementation level.

I agree with your almost-rhetorical question. IMO the current
generation of Lisp is already just fine for building such tools and
distributing them as third-party libraries (eventually shipped by
vendors *with* new versions of Lisp).

> Most of lisp just doesn't need specific implementation support,
> but can be implemented directly over a core lisp.  (Without going
> down to the theoretical pure lambda calculus, you could implement CL
> just with its 17 special operators, and all the rest implemented
> above them).

I think you're stretching the truth a little bit here. System hooks
must also be provided, either directly by the vendor or indirectly
via a general-purpose facility such as LAP, SYSLISP, or a
C interface.

> So the point is that if you, as a lisp programmer, feel the need
> for a pattern matcher, then you can implement it yourself in lisp
> as a library, and it will be as integrated to the language as any
> other CL operator, such as CLOS or LOOP for example.

I agree, and encourage you to establish the equivalent of "Consumer
Reports" to evaluate all the various libraries that are available
so that people won't be burned by finding five different free
libraries that all claim to do the same thing, only to have whichever
one they choose turn out to be so buggy as to be a complete waste of
time. Before Lisp-CR can be developed, we need to decide how to
categorize/index the various kinds of functionality that libraries
perform. IMO the best way is via intentional datatypes. Each
library defines some intentional datatype and provides various
utilities to work with that new datatype (and also with more
primitive datatypes that already existed), in much the same way
that OOP (as in Java) has each Class define a type of data
(instances of that Class) with various Methods specialized to that
class. So do you like my idea and are you willing to work with me
and others to describe for each existing Lisp library what
intentional datatype it implements, as a first step towards Lisp-CR?

> But I don't remember you [Harrop] citing any feature of pattern
> matching that would need implementation level support.    On the
> contrary, we have the example of several pattern matcher libraries.

Would you be willing to create a Web page that lists all these
particular libraries, describing for each what intentional datatype
it implements, and comparing the functionality of these various
libraries? Classifying and describing (and eventually evaluating
per CR) all the Lisp libraries in the world would be a daunting
task, but perhaps doing all that for just the pattern matching
libraries would be doable in short term and establish an example of
a small subset of Lisp-CR, like the first step of a journey of a
thousand miles?

> I'm inciting you to write it, because you seem to be almost alone
> wanting to have it.  The existing pattern matchers are used when
> they are needed, but obviously most lisp programmers don't feel the
> need for them in all circumstances.

Actually I might like to use one of them myself, if I could find a
Web page that described them all in an understandable way so that I
could efficiently evaluate which if any would serve any need I
might ever have. It's sorta analagous to if I never opened the
Sears catalog I would never see all those various kinds of things
for sale that I might find a use for, and I wouldn't even know what
I'm missing.

By the way, I really wish there was a decent free version of Common
Lisp that ran under MacOS 7.5.5. Back when my Mac Plus with System
6.0.3 was still running (from 1990 to 1999), Macintosh Allegro
Common Lisp (which I got for free due to my work at Stanford) ran
fine, but it freezes the system on System 7.5.5 (the Mac Performa I
got in 1998). XLISP is total crap, even more crap than I complained
about weeks ago, as I learned a few days ago: it doesn't even do
integer arithmetic correctly, because it doesn't have BIGNUMS!!! And some
other version of Lisp (PowerLisp 2.01) that I tried briefly a few
months ago tended to lock up the whole machine, so I deleted it.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Algebraic datatypes (was: Parallel Common-Lisp with at least 64 processors?)
Date: 
Message-ID: <rem-2008jun25-002@yahoo.com>
> From: George Neuner <·········@/comcast.net>
> Since Lisp doesn't have algebraic datatypes,

That's a false/misleading premise. Lisp doesn't have a *built-in*
algebraic datatype, but it can have a user-defined *intentional*
algebraic datatype any time you want. In a way, intentional
datatypes are "more important" than built-in datatypes, because
built-in datatypes are of extremely limited expressive power, not
expressive enough for even the most trivial application, in a sense
not even expressive enough to cover the full intentions of the
text-string-data and print-operation involved in the notorious "Hello,
World!" program.
 (In actuality, the program may print that text as part of a
   US-ASCII stream of bytes on standard output (on Unix/Linux), or
   as physical printing on paper by console typewriter (on IBM 1620
   or IBM 1130), or in code form such as smiley face on console
   lights (on Altair 8800), or as part of a US-ASCII stream of
   bytes on RS-232 serial port (on my Clem-Smith-homebrew MOS-6502
   computer), or as HTTP output to a Web browser (from any of my
   Hello+CGI demos), or as printing of holes in paper tape (some
   other machine configuration described in the Hello World
   collection), etc. etc.
  Each case is different, a different *actual* way to effect the
   common *intention* of announcing to the world (actually any
   handy observers, "Hey Rocky, watch me pull a hello world out of
   this computer") the fact that the programmer has succeeded in
   installing and running the program on yet another computer
   and/or using yet another programming language and/or being
   accomplished by yet another novice programmer or experienced
   programmer learning a new language and/or system.
  It's only the *intention* that is the same across all hello-world
   programs, and that *intention* is *not* expressible in *any* of
   those computer languages/implementations AFAIK.)
 (OK, there might be one exception: The natural-language A.I. folks
   might actually have solved the problem of representing the full
   meaning/intention of a suitable subset of natural language in
   some machine-data format, probably Lisp data, maybe Java data,
   such that their version of the hello-world program really *does*
   include an expression of the *intent* of the hello-world program.
   If anyone up-to-date in such research can report such a capability,
   I'd be glad to stand corrected here.
  And the robotics A.I. folks might have used this as a way to
   instruct a robot to implement the full semantics of actually
   trying to draw the attention of people to the existence of this
   version of the hello-world program. [un]fortunately these folks
   are not nasty enough to tell their hello-world robot how to spam
   or telemarket the announcement, so I haven't gotten their
   announcement in my mailbox yet, AFAIK. I have tens of thousands
   of spam I've never looked at, so maybe it's there and I haven't
   seen it because I didn't know to look for it specially. I'm
   pretty sure this intelligent robot hasn't yet purchased
   advertising time on any of the major broadcast networks. But
   maybe this intelligent robot is responsible for the current
   speculation run-up on petroleum prices, and as soon as the robot
   cashes out it'll have enough funds to start advertising on TV.
  But IMO it's more likely a smart system that is hardwired to spam
   a hello-world message would be contrived by some nasty person to
   pretend to be an A.I. hello-world program. And it's more likely
   somebody who just got lucky on the stock market, or happened to
   win a major lottery, would try to make himself famous by
   purchasing dishonest TV ads to claim an A.I. result that in fact
   hasn't been achieved despite 60 years of A.I.
   natural-language-understanding research.
  I'm sure if any major natural-language A.I. result had been
   achieved, Google would purchase rights to use it in their search
   engine, at which point search results would be automatically
   categorized according to the semantics of the keyword usage
   rather than just listed linearily and ranked per best-guess of
   relevance. Like if you search for AIM-8, it'd automatically
   organize the results into:
     one major category for in-ceiling speakers,
     another for a mutant phenotype,
     another for biometrics capability, such as provided by aim8.com,
     another for the juxtaposition of terms in
      "Ready, Fire, Aim - 8 Common Search Advertising Mistakes To Avoid",
     another for Bio-Impedance Technology, Inc.
      (or does that belong within the biometrics category?),
     another for "Hunter.s - KlaSH#PG Aim 8%",
      whatever that's supposed to mean,
     another for the world's first computer generated encyclopedia,
     another for the juxtaposition of the terms on veoh.com,
   and still there'd be room for McCarthy's seminal paper on Lisp
   in the top ten results, with one to spare.)

> you need to compare the closest equivalent - generic function dispatch.

I dispute that a little teeny bit. There are two different issues:
-1- Having a **data** type with the appropriate *implied* semantics.
-2- Having software to actually implement those semantics on that data type.
For 1, it's sufficient to write a constructor to build lists whose
 first element is :ALGEBRAIC and whose remaining elements are
 whatever components are needed for an algebraic-data-type object.
 Using CLOS, we could define an explicit Object type to encapsulate
 the components in a CLOS object, to make dispatching cleaner, but
 that doesn't change the fact already obtained via a constructor
 that uses CONS in the old-fashioned way. There are several other
 possible ways to fake a new intentional datatype without needing
 CLOS, such as a constructor that creates an uninterned symbol which
 has a magic tag on the property list to identify this symbol as
 being of the :ALGEBRAIC intentional class and has other tags to
 locate the various components. How you implement the algebraic
 intentional data type is not super-important, since there are so
 many simple ways to do it in Lisp. The point is that it's trivial
 to do it any of those ways, at the whim of the lead programmer.
For 2, what you say sounds like it would work.
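
For illustration, the first of those representations (the
:ALGEBRAIC-tagged list) could be as simple as this (the names are ad
hoc, invented just for this example):

;; Ad hoc names, invented for this example.
;; An "algebraic" object is just a list tagged with :ALGEBRAIC.
(defun make-algebraic (constructor &rest components)
  (list* :algebraic constructor components))

(defun algebraic-p (object)
  (and (consp object) (eq (first object) :algebraic)))

(defun algebraic-constructor (object) (second object))
(defun algebraic-components  (object) (cddr object))

;; (make-algebraic 'add 1 'x) => (:ALGEBRAIC ADD 1 X)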

> Not only is that heavily optimized in most Lisps, but it directly
> corresponds to ML's pattern matching alternation.

Which of my proposed representations for the data object of that
intentional class would most likely lead to highly-optimized code
after block-compilation of the source file has occurred? (Or did
you have some other internal representation in mind when you made
that claim about generating optimized code?)

> I can't recall for certain, but I don't think any of your Lisp
> micro-benchmarks focused on generic functions.

That would be a wonderful way to show how Common Lisp (per some
specific implementations which do indeed generate optimized code as
you claim) is so much *faster* than Java or C++ or (you name it,
latest fad being promoted in comp.lang.lisp and comp.programming).

> >I've led you to water but I can't make you drink.
> We'll drink when you turn the water into wine.

I was going to say something like
  "I'm quite content to drink *potable* water, I don't actually
   like wine anyway, but the water the harper is leading us to
   smells more like urine than potable water",
but before I could post this I saw that somebody already got the
idea first by retorting that the harper is pissing in the well, and
I will yield to that and withdraw my alternative expression of the
same idea.

Hey, Rocky! Watch me pull a brand-new jargon/slang term out of my head:
"harp water" = urine (category 311.5 in Roget's International
                      Thesaurus, ISBN 0-690-00011-1)
Emptomology [sp?] (Google doesn't provide a spelling correction
 like it usually does, but it claims only 10 search results total,
 so this has *gotta* be a wrong spelling.)
Enptomology (Google provides no matches at all, and suggests entomology)
 <joke>That spelling sounds like the study of ents, those little
       ensicts that come into my kitchen foraging for food.</joke>
Entomology - 6,740,000 matches, including:
  <http://en.wikipedia.org/wiki/Entomology>
  <rearranged>the scientific study of insects ... not to be confused with
               Etymology, the study of the history of words.</rearranged>
Etymology - 7,470,000 matches, including:
  <http://en.wikipedia.org/wiki/Etymology>
   Etymology is the study of the history of words -- when they entered a
   language, from what source, and how their form and meaning have
   changed over time.
OK, now that I got the correct spelling of the tag for what I want
to say, here's what I want to say (not to compete some fellow, not
C.P.Snow, somebody else of similar reputation, whose name I can't
remember at the moment, who also invented his own slang, and later
wrote in his memoirs how he had originally conceived the new slang):

When somebody has invented a new tool or has become very fond of a
 recently-invented tool, and mistakenly believes this newly/recently
 invented tool will solve all problems that anybody has in a large
 field, if only everyone can be induced to try it;
And keeps touting that one new invention again and again, bragging
 about its good points, and ignoring rebuttal about its bad points;
And when frustration sets in, and that person turns to harping
 about how everyone is ignoring this wonderful new tool, and how
 everybody is stupid for not immediately jumping onto the bandwagon
 of using this new tool;
At some point the harper may issue the claim "I've led you to
 water, but I can't make you drink" or some variation thereof;
An appropriate response is to note that the "water" isn't potable,
 it looks/smells/tastes more like urine.
Accordingly the urine, i.e. the so-called "water", offered by the
 harper, is hereby termed "harp water".

<not>Great!</not> I did a google search for
  urine 311.5
and the first match was
  <http://www.ncbi.nlm.nih.gov/pubmed/8223117>
          The post-operative serum luminescence intensity of patients was
          415.8 +/- 186.6 and that of urine was 311.5 +/- 163.5.
a totally freak coincidence, and the next several Google matches
seem to be duplicates of this same report in different formats on
different Web sites. It really sucks that Google's search engine is
incapable of grouping these essentially-identical uses of that
two-term combo within a single sub-category.

Next I did a google search for
  urine 311.5 thesaurus
and the first match was
  <http://www.its.berkeley.edu/library/tranlib/tranlib6.html>
    ACI 311.5R-97: Guide for Concrete Plant Inspection and Field
                   Testing of Ready-Mixed Concrete
and the next several matches didn't even mention 311.5 so I have no
idea why Google turned them up except maybe because it ran out of
all-my-terms matches and started giving me all-but-one term
matched. Not a single good match there.

Next I did a google search for
  urine 311.5 Roget's thesaurus
which told me
   Your search - urine 311.5 Roget's thesaurus - did not match any
   documents.
I conclude that Roget's Thesaurus is no longer online. (It was
online about 8-10 years ago, and I bookmarked it, but that link
doesn't produce anything decent any longer.) So you gotta buy the
book form of ISBN 0-690-00011-1 if you want to see the synonyms.

Gee, that whole section 311 (excrement) would be fun to test on
television or radio, to check which synonyms for various forms of
excrement would be allowed and which censored, if only George
Carlin were still alive to do it as part of a live broadcast of a
"comedy" skit, or perhaps during an interview on NPR or KKUP 91.5 FM.

So I wonder how fast "harp water" will make it to one of the online
jargon/slang databases?


From: George Neuner
Subject: Re: Algebraic datatypes (was: Parallel Common-Lisp with at least 64 processors?)
Date: 
Message-ID: <jana64po6i3vv4j5a9mqkc7e3dd0hdfqkt@4ax.com>
On Wed, 25 Jun 2008 21:42:11 -0700,
··················@spamgourmet.com.remove (Robert Maas,
http://tinyurl.com/uh3t) wrote:

>> From: George Neuner <·········@/comcast.net>
>> Since Lisp doesn't have algebraic datatypes,
>
>That's a false/misleading premise. Lisp doesn't have a *built-in*
>algebraic datatype, but it can have a user-defined *intentional*
>algebraic datatype any time you want. In a way, intentional
>datatypes are "more important" than built-in datatypes, because
>built-in datatypes are of extremely limited expressive power, not
>expressive enough for even the most trivial application, in a sense
>not even expressive enough to cover the full intentions of the
>text-string-data and print-operation involved in notorious "Hello,
>World!" program.

Jon Harrop and I were discussing types as in ML ... not as in Lisp.
see

http://en.wikipedia.org/wiki/Algebraic_data_types

Lisp does not have these algebraic types.

What Lisp does have is an extensible hierarchy of types which can be
identified and tested for.  A generic function's argument list forms an
algebraic pattern in the ML sense, and the dispatch process is quite
similar to ML's built-in pattern matching.
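
For example, the simplifier rules discussed elsewhere in this thread
map onto methods almost one for one (just a sketch; EQL specializers
play the role of ML's constant patterns):

;; Sketch: EQL specializers play the role of constant patterns.
(defgeneric s-add (a b))
(defmethod s-add ((a number) (b number)) (+ a b)) ; n + m -> sum
(defmethod s-add ((a (eql 0)) b) b)               ; 0 + f -> f
(defmethod s-add (a (b (eql 0))) a)               ; f + 0 -> f
(defmethod s-add (a b) (list '+ a b))             ; default: rebuild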


>
>> you need to compare the closest equivalent - generic function dispatch.
>
>I dispute that a little teeny bit. There are two different issues:
>-1- Having a **data** type with the appropriate *implied* semantics.
>-2- Having software to actually implement those semantics on that data type.
>For 1, it's sufficient to write a constructor to build lists whose
> first element is :ALGEBRAIC and whose remaining elements are
> whatever components are needed for an algebraic-data-type object.
> Using CLOS, we could define an explicit Object type to encapsulate
> the components in a CLOS object, to make dispatching cleaner, but
> that doesn't change the fact already obtained via a constructor
> that uses CONS in the old-fashioned way. There are several other
> possible ways to fake a new intentional datatype without needing
> CLOS, such as a constructor that creates an uninterned symbol which
> has a magic tag on the property list to identify this symbol as
> being of the :ALGEBRAIC intentional class and has other tags to
> locate the various components. How you implement the algebraic
> intentional data type is not super-important, since there are so
> many simple ways to do it in Lisp. The point is that it's trivial
> to do it any of those ways, at the whim of the lead programmer.
>For 2, what you say sounds like it would work.

I'm not trying to implement algebraic types ... I'm considering that
the arguments of a generic function are analogous to an algebraic type
and the method dispatch is a pattern match over them.


George
--
for email reply remove "/" from address
From: Patrick May
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <m2y74x9w36.fsf@spe.com>
Jon Harrop <···@ffconsultancy.com> writes:
> Patrick May wrote:
>> If your goal is to encourage adoption of F#, you're failing
>> miserably.  Your behavior in this newsgroup alone has eliminated
>> any interest I might have had in investigating it.  I suspect I'm
>> not alone in this view.
>
> I've led you to water but I can't make you drink.

     Your behavior here is more akin to pissing in the water.

Sincerely,

Patrick

------------------------------------------------------------------------
S P Engineering, Inc.  | Large scale, mission-critical, distributed OO
                       | systems design and implementation.
          ···@spe.com  | (C++, Java, Common Lisp, Jini, middleware, SOA)
From: John Thingstad
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <op.uc5k26wgut4oq5@pandora.alfanett.no>
On Sun, 22 Jun 2008 12:56:14 +0200, Jon Harrop
<···@ffconsultancy.com> wrote:

> Robert Maas, http://tinyurl.com/uh3t wrote:
>> If you mean something more complicated as "pattern matching", you need
>> to say precisely what you mean.
>
> I was referring to ML-style pattern matching (including dynamic dispatch
> over nested algebraic data types) as seen in F#, OCaml, Haskell and SML.
>
> Most of the functionality of the symbolic simplifier I cited is encoded
> in the pattern matching:
>
> let rec ( +: ) f g = match f, g with
>    | `Int n, `Int m -> `Int (n +/ m)
>    | `Int (Int 0), e | e, `Int (Int 0) -> e
>    | f, `Add(g, h) -> f +: g +: h
>    | f, g -> `Add(f, g)
> let rec ( *: ) f g = match f, g with
>    | `Int n, `Int m -> `Int (n */ m)
>    | `Int (Int 0), e | e, `Int (Int 0) -> `Int (Int 0)
>    | `Int (Int 1), e | e, `Int (Int 1) -> e
>    | f, `Mul(g, h) -> f *: g *: h
>    | f, g -> `Mul(f, g)
> let rec simplify = function
>    | `Int _ | `Var _ as f -> f
>    | `Add (f, g) -> simplify f +: simplify g
>    | `Mul (f, g) -> simplify f *: simplify g
>
> All three functions just dispatch over their arguments using pattern
> matches ("match .. with ..." or "function ...").  The values they are
> matching are algebraic datatypes and integers in this case.
>
> Run ocaml with the "-dlambda" option and you can see the optimized
> Lisp-like intermediate representation that OCaml's pattern match
> compiler generates:
>
> (seq
>   (letrec
...
>
> Consider the enormous waste of time and effort involved in maintaining
> Lisp code like this.  In practice, Lispers just give up, write naive
> code and kiss goodbye to performance.  That is precisely what I was
> alluding to.
>

No, we just write a macro to do the expansion.
Or use method combination for type inference.

Code like this is typically only a few percent of the actual code in a  
real application.
Most of the time is spent on more mundane stuff like managing windows,  
printing reports, managing databases and the like.

This code example is so obviously tailored to OCaml-style pattern matching
that it says little about actual performance at the application level.

Earlier this week I wrote some code to pretty print 100!. You can look it
up. Now how would that code look in OCaml or F#?

--------------
John Thingstad
From: Jon Harrop
Subject: Re: When extensibility goes bad
Date: 
Message-ID: <vKKdnXD90cs79MPVnZ2dnUVZ8rGdnZ2d@posted.plusnet>
John Thingstad wrote:
> On Sun, 22 Jun 2008 12:56:14 +0200, Jon Harrop
> <···@ffconsultancy.com> wrote:
>> Consider the enormous waste of time and effort involved in maintaining
>> Lisp
>> code like this. In practice, Lispers just give up, write naive code and
>> kiss goodbye to performance. That is precisely what I was alluding to.
> 
> No we just write a macro to do the expansion.

You aspire to write an ad-hoc, informally-specified, bug-ridden, slow
implementation of half of a modern pattern match compiler but, in practice,
the Lisp community lacks the direction and team skills required to do that
so they end up writing many incompatible pattern match compilers, none of
which are worth having. This is "when extensibility goes bad".

> Or use method combination for type inference.

Which doesn't even handle this trivial example let alone real code.

> Code like this is typically only a few percent of the actual code in a
> real application.

Of course, everything that Lisp makes unnecessarily difficult is rarely seen
in Lisp code.

> Most of the time is spent on more mundane stuff like managing windows,
> printing reports, managing databases and the like.

All of which would be done using pattern matching if it were available.

> This code example is so obviously tailored to OCaml-style pattern matching
> that it says little about actual performance at the application level.

Pick an application, any application.

> Earlier this week I wrote some code to pretty print 100!. You can look it
> up. Now how would that code look in OCaml or F#?

Do you want anything beyond the library function run in a REPL that already
does pretty printing:

  Math.BigInt.factorial 100I

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Kenny
Subject: Re: When extensibility goes bad
Date: 
Message-ID: <485e97fa$0$5025$607ed4bc@cv.net>
Jon Harrop wrote:
> All of which would be done using pattern matching if it were available.

I wonder if you are aware that the Harrop harp has but one string? 
Anyone who has followed your lame attempts to build a business and 
extraordinary success at making yourself unlikable cannot help noticing 
it: all you ever talk about is pattern-matching. Pattern-matching is 
fun, I used it once. But no stupid pet trick, not even Cells, suffices 
for everything.

The most regular error in language design is, "Hey! Let's use it for 
/everything/!". Welcome to the club.

The only thing I read from your pattern obsession is that F# offers 
nothing else.

hth, kenny
From: Slobodan Blazeski
Subject: Re: When extensibility goes bad
Date: 
Message-ID: <3c49d5f9-7edf-43e2-bb67-7a44e019980c@z72g2000hsb.googlegroups.com>
On Jun 22, 8:20 pm, Kenny <·········@gmail.com> wrote:
> Jon Harrop wrote:
> > All of which would be done using pattern matching if it were available.
>
> I wonder if you are aware that the Harrop harp has but one string?
> Anyone who has followed your lame attempts to build a business and
> extraordinary success at making yourself unlikable cannot help noticing
> it: all you ever talk about is pattern-matching. Pattern-matching is
> fun, I used it once.
Bleh, pattern matching is the poor man's unification.  If you like pattern
matching, don't learn Prolog; you'll be disappointed by the artificial
limits placed on pattern matching in the name of performance. Oh, you
learned Prolog already?
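
To make the contrast concrete, a minimal sketch of unification in Common
Lisp (Norvig-style; UNIFY, UNIFY-VAR and VAR-P are names invented here,
and the occurs check is omitted). Pattern matching allows variables on one
side only; unification lets both sides contain them:

(defun var-p (x)
  ;; Variables are non-NIL symbols whose names start with ?.
  (and (symbolp x) x (char= #\? (char (symbol-name x) 0))))

(defun unify (x y &optional (bindings '()))
  ;; Return an alist of bindings unifying X with Y, or :FAIL.
  (cond ((eq bindings :fail) :fail)
        ((var-p x) (unify-var x y bindings))
        ((var-p y) (unify-var y x bindings))
        ((eql x y) bindings)
        ((and (consp x) (consp y))
         (unify (cdr x) (cdr y) (unify (car x) (car y) bindings)))
        (t :fail)))

(defun unify-var (var val bindings)
  (let ((binding (assoc var bindings)))
    (cond (binding (unify (cdr binding) val bindings))
          ((and (var-p val) (assoc val bindings))
           (unify var (cdr (assoc val bindings)) bindings))
          (t (cons (cons var val) bindings)))))

;; (unify '(f ?x b) '(f a ?y)) => ((?Y . B) (?X . A))
;; a pattern matcher would reject the variable ?y on the right-hand side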

And about performance, I couldn't care less. I have 8 cores doing
nothing 99.999% of the time.
Memory consumption is even less of an issue with my 16 GB of RAM, even
running a hog of an OS, and don't even get me started about the HD.
The only area I'm interested in that still needs performance is
games, and after buying a new GPU even my 7-year-old
Celeron 2.4 runs all of them without a glitch.
Screw performance; F#/OCaml are solving the wrong problem. Lisp has a
fine unification lib, and even if it's 1000 times
slower than F#/OCaml, who cares? Not me, certainly.
Even on my year-old dual-core laptop with 2 GB RAM I could run a whole
company.  Processing power is dirt cheap nowadays.
> But no stupid pet trick, not even Cells, suffices
> for everything.
>
> The most regular error in language design is, "Hey! Let's use it for
> /everything/!". Welcome to the club.


>
> The only thing I read from your pattern obsession is that F# offers
> nothing else.
>
> hth, kenny
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: When extensibility goes bad
Date: 
Message-ID: <rem-2008jun23-005@yahoo.com>
> > All of which would be done using pattern matching if it were available.
> From: Kenny <·········@gmail.com>
> I wonder if you are aware that the Harrop harp has but one string?

What is he, the musical equivalent of a "hacker"? His instrument
has only one string, but he's very good at making do with that one
string, carefully pinching it between his fingernails at various
points along its length to shorten the wavelength hence raise the
frequency without damping the oscillations too much? I've seen
other musical hackers on TV, one who could blow musical tunes
through his underarm, one who could blow musical tunes through his
puffed mouth, one who could tap musical tunes on his tummy or arm I
forget which.

When I was a kid, my father used to blow tunes through a leaf of
grass folded in half. I myself have demonstrated playing a few
simple tunes using an old party horn where the spiral-windup paper
part has long since torn off, but I'm nowhere near as skillful as those
musical hackers I've seen on TV.

Then there was that singer from South America (Yma Sumac or
something like that) who could sing five octaves with her vocal
cords.  Quick, Google search:
<www.yma-sumac.com>
   (Inca princess / Peruvian diva)
<http://en.wikipedia.org/wiki/Yma_S%C3%BAmac>
   (her extreme vocal range "well over three octaves"^[1], which
    was commonly claimed to span four and even five octaves at its
    peak^[2]^[3].)
..
   Yma Sumac recorded an extraordinarily wide vocal range of more than
   four octaves, from B[2] to C#[7] (approximately 123 to 2270 Hz). She
   was able to sing notes in the low baritone register as well as notes
   above the range of an ordinary soprano. ...
..
     * Voice of the Xtabay (1950), Capitol Records H-244 (10" LP)^[13]
     * Inca Taqui (1953), Capitol L-243 (10" LP)
     * Voice of the Xtabay, Capitol W-684 (both of the above on one 12"
       LP)
I remember that title "Voice of the Xtabay" as the record that my
parents had, but I don't remember whether it was a 10-inch or 12-inch.

> The most regular error in language design is, "Hey! Let's use it for
> /everything/!". Welcome to the club.

Yeah, Java is sorta like that, object-oriented in the strict sense
that everything must be in a Class, and every Class must therefore
define a type of structure with methods, even in the degenerate
case where no instance variables are defined hence any instances of
the class are empty of any data, and all methods are static so
there's no point in ever making an instance except possibly as a
marker, and in fact no instances of that Class are ever made
because nobody has found a use for an object of that Class as a
marker.

Lisp is so much the opposite, starting with a read-eval-print loop built
on APPLY and EVAL (and now FUNCALL), adding special forms and macros in a
clean way,
fully developing procedural programming and going a decent ways
along functional programming, then introducing lexical closures and
using them to emulate various levels of continuations and lazy
evaluation and currying and OOP, and introducing a flexible reader
with dispatch characters and reader macros which allow extending
the syntax to support some classes of DSLs without needing to write
a DSP (Domain-Specific Parser) from scratch, and finally settling
on CLOS for full-fledged OOP beyond anything C++ or Java can do, and
nicely integrating *all* those modes of programming in a consistent
environment. Sure, at the time it was ANSI-standardized, it was
still missing a few important things, such as Unicode support.
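
Two of the closure tricks alluded to above in miniature, as a sketch
(CURRY, DELAY and FORCE are illustrative names, not ANSI CL operators):

(defun curry (f &rest args)
  ;; Currying via closures: fix some arguments now, supply the rest later.
  (lambda (&rest more) (apply f (append args more))))

(defmacro delay (form)
  ;; Lazy evaluation via closures: FORM is evaluated at most once,
  ;; the first time the resulting promise is forced.
  `(let ((done nil) (value nil))
     (lambda ()
       (unless done (setf value ,form done t))
       value)))

(defun force (promise) (funcall promise))

;; (funcall (curry #'+ 1 2) 3) => 6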

> The only thing I read from your pattern obsession is that F# offers
> nothing else.

Ha ha, with friends like Harpo, OCaml doesn't need enemies!

I hope I never post anything that degrades Common Lisp as badly as
Harpo degrades OCaml.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: When extensibility goes bad
Date: 
Message-ID: <rem-2008jun23-004@yahoo.com>
> > This code example is so obviously tailored to OCaml's pattern matching
> > that it says little about actual performance at the application level.
> From: Jon Harrop <····@ffconsultancy.com>
> Pick an application, any application.

Let me jump in here and suggest an application:
You have Received lines from lots of spam.
You want to classify them according to patterns, typically per the
e-mail software that generated them, except you don't have access
to the software run by all the mail servers in the world that are
involved in sending you spam, so you can look only at the final
product, what appears in the headers of the spam you receive, and
try to guess what patterns there are in the formats you see.
What sort of algorithm would you propose for that task, independent
of the language you might try to program it in?
Then after we agree on that answer, I'll ask everyone here which
programming language makes implementation of that algorithm
easiest, most "obvious", most straightforward, never mind efficiency
at this point.

Now let's say the result of the previous software task was that
there are fourteen different patterns of format of Received lines
in the corpus of spam you've been studying. Next you want to write
a parser for each of those patterns, or even better a single parser
that can simultaneously recognize the pattern and parse it. The
result of each parse would be a parse tree, whose structure is
specific to the particular pattern it's parsing. But wherever a
low-level syntax element is common to more than one high-level
pattern, you want to recognize that syntax element as "the same"
element and tag it the same way each time it appears regardless of
what pattern it appears in.
What sort of algorithm would you propose for that task, independent
of the language you might try to program it in?
Then after we agree on that answer, I'll ask everyone here which
programming language makes implementation of that algorithm
easiest, most "obvious", most straightforward, never mind efficiency
at this point.

In both cases, save mention of programming languages for later.
First (now) the informal description of the two algorithms-of-choice,
with no reference to any specific programming language, only
reference to standard types of data structures, such as nested
lists, hash tables, binary search trees, etc. and algorithms that
make efficient/comprehensible use of those data structures, such as
regression analysis of local text-string statistics, compilers from
BNF tables to one-pass dispatch tables, recursive descent parsers,
etc.
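
Setting aside for a moment the request to defer language choice, one naive
shape for the first task, sketched in Common Lisp purely to fix ideas
(SHAPE-SIGNATURE and CLASSIFY are invented names, and a real classifier
would want a finer token alphabet): reduce each Received line to a coarse
shape signature -- digit runs and words abstracted, punctuation kept --
and bucket lines by signature, on the theory that lines produced by the
same mail software share a shape:

(defun shape-signature (line)
  ;; Collapse every digit run to 9 and every alphabetic run to A,
  ;; keeping punctuation and whitespace literally.
  (with-output-to-string (out)
    (let ((i 0) (n (length line)))
      (loop while (< i n)
            do (let ((c (char line i)))
                 (cond ((digit-char-p c)
                        (loop while (and (< i n) (digit-char-p (char line i)))
                              do (incf i))
                        (write-char #\9 out))
                       ((alpha-char-p c)
                        (loop while (and (< i n) (alpha-char-p (char line i)))
                              do (incf i))
                        (write-char #\A out))
                       (t (write-char c out)
                          (incf i))))))))

(defun classify (lines)
  ;; Bucket lines by signature in an EQUAL hash table.
  (let ((buckets (make-hash-table :test #'equal)))
    (dolist (line lines buckets)
      (push line (gethash (shape-signature line) buckets)))))

;; (shape-signature "from mx1.example.com ([10.0.0.1]) by foo; id 123")
;; => "A A9.A.A ([9.9.9.9]) A A; A 9"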

> > Earlier this week I wrote some code to pretty print 100!. You
> > can look it up. Now how would that code look in OCaml or F#?

Hey, I was going to ask what you mean by that. Do you mean
expressing the numeric value in some radix and then inserting
commas every third group from the right and then inserting line
breaks (with indentation on all lines except the first, or
right-adjust to right margin all lines with first line starting
midway in the line)? Or do you mean completely factoring the
numeric value and collecting like prime factors then formatting the
factorization expression to make optimal use of lines of output? Or
what??
From: John Thingstad
Subject: Re: When extensibility goes bad
Date: 
Message-ID: <op.uc82qvqput4oq5@pandora.alfanett.no>
On Tue, 24 Jun 2008 07:16:33 +0200, Robert Maas,
http://tinyurl.com/uh3t <··················@spamgourmet.com.remove> wrote:

>
> Hey, I was going to ask what you mean by that. Do you mean
> expressing the numeric value in some radix and then inserting
> commas every third group from the right and then inserting line
> breaks (with indentation on all lines except the first, or
> right-adjust to right margin all lines with first line starting
> midway in the line)? Or do you mean completely factoring the
> numeric value and collecting like prime factors then formatting the
> factorization expression to make optimal use of lines of output? Or
> what??

like this (thread pretty printing, a lost art?)

;; Assumes FACT (factorial) is already defined and that SPLIT-SEQUENCE
;; comes from the usual SPLIT-SEQUENCE library. ~:D inserts the commas;
;; the ~<...~:> block with ~:_ then fills the three-digit groups onto
;; lines, left-padding each group to width 3 with 0 via ~3,,,'0@A.
(defun print-fact (n)
  (let* ((fact (fact n))
         (fact-string (format nil "~:D" fact))
         (fact-list (split-sequence #\, fact-string)))
    (format t "~&~<~{~3,,,'0@A~^~4T~:_~}~:>" (list fact-list))
    (values)))

MY-USER 54 > (print-fact 100)
093 326 215 443 944 152 681 699 238 856 266 700 490 715 968 264 381 621  
468 592 963 895 217 599 993 229 915 608
941 463 976 156 518 286 253 697 920 827 223 758 251 185 210 916 864 000  
000 000 000 000 000 000 000

or

     > (let ((*print-right-margin* 40)) ; try for 10 groups per line
	   (print-fact 100))

     093 326 215 443 944 152 681 699 238 856
     266 700 490 715 968 264 381 621 468 592
     963 895 217 599 993 229 915 608 941 463
     976 156 518 286 253 697 920 827 223 758
     251 185 210 916 864 000 000 000 000 000
     000 000 000
     >

--------------
John Thingstad
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Pattern matching (was: Parallel Common-Lisp with at least 64 processors?)
Date: 
Message-ID: <rem-2008jun23-003@yahoo.com>
> > If you mean something more complicated as "pattern matching", you need to
> > say precisely what you mean.
> From: Jon Harrop <····@ffconsultancy.com>
> I was referring to ML-style pattern matching (including dynamic
> dispatch over nested algebraic data types) as seen in F#, OCaml,
> Haskell and SML.

What you said there is meaningful only to somebody who already
knows that particular part of ML or F# or OCaml or Haskell or SML
at a deep level of understanding.

To the rest of us, including myself, you seem to be evading
answering the question, and to me you appear to be somebody who
doesn't understand any of that at a deep level yourself and that's
why you refuse to say what you mean but instead make vague
references to the way it's done in various other languages. To
prove you *do* actually understand what you're talking about,
please peruse the documentation for those five languages, searching
for succinct explanations of what you're talking about, then
copy&paste the relevant text into this thread with citation of
precisely where you found the original text. If you don't find any
suitable explanation in the online documentation for *any* of those
five languages, perhaps you can find sufficient semi-explanations
that you can modify the wording to be suitable to post here.

> Most of the functionality of the symbolic simplifier I cited is
> encoded in the pattern matching:
> let rec ( +: ) f g = match f, g with
>    | `Int n, `Int m -> `Int (n +/ m)
>    | `Int (Int 0), e | e, `Int (Int 0) -> e
>    | f, `Add(g, h) -> f +: g +: h
>    | f, g -> `Add(f, g)

I've never programmed in whatever language that's supposed to be,
and don't even recognize the language. If you wrote in French or
Tagalog I'd have a better chance of at least guessing what language
it's supposed to be. Please try to communicate basic ideas, such as
what the heck you mean by "pattern matching", in ordinary English
that we can all understand. *After* you've explained the basic idea
well enough that we understand, *then* you can start showing code
examples from various languages with explanations of how the code
relates to the understanding of basic principles we already have.

> All three functions ... The values they are matching are
> algebraic datatypes and integers in this case.

What are "algebraic datatypes"? Is this what you mean?
<http://en.wikipedia.org/wiki/Algebraic_data_types>
   In computer programming, an algebraic data type is a datatype each of
   whose values is data from other datatypes wrapped in one of the
   constructors of the datatype. Any wrapped datum is an argument to the
   constructor. In contrast to other datatypes, the constructor is not
   executed and the only way to operate on the data is to unwrap the
   constructor using pattern matching.
OK, I've gotta play (deceased) George Carlin here: If you never
execute the constructor, so that it never actually constructs
anything, then isn't it a bit of an oxymoron to call it a
"constructor"? And if it's never executed, how would you know it's
a real constructor, and not just a fake that wouldn't do anything
if you were to execute it? Is it like purchasing a shrink-wrapped
software product with a warning that opening the shrink wrap voids
the warranty, and in fact a disclaimer that opening the shrink wrap
causes a self-destruct mechanism to be triggered whereby the CD-ROM
is immediately erased? Or is it like an instruction tape in
Mission Impossible, where you can carry it around unopened as long
as you want, and believe strongly that it's really an instruction
tape, but as soon as you open it you can listen to it just once to
verify it really *was* an instruction tape before it self-destructs?
Thus you can verify once, in a destructive way, that yes Virginia
it really and truly *was* an instruction tape, or a constructor?

If you can never test that the thingy really is a constructor, can
I give you something that I claim is a constructor, even though I'm
lying, it's just a symbol, or even an integer, with no special
properties, no utility whatsoever, and you are required to trust me
that it really is a constructor, and you are forbidden to ever
check if it really is a constructor? Is it like the monsters in
children's bedrooms, or mummies in Abbott&Costello movies, which
disappear whenever the parents or Abbott show up respectively?
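
For what it's worth, a rough Common Lisp rendering of the OCaml fragment
quoted earlier, as a sketch. The "algebraic datatype" amounts to: an
expression is either an integer or an addition node; the "pattern
matching" is the dispatch that takes such a value apart. (ML compilers
additionally check at compile time that no case has been forgotten,
which COND cannot.)

(defstruct add-node left right)   ; plays the role of the `Add constructor
;; plain integers play the role of the `Int constructor

(defun simplify-+ (f g)
  (cond ((and (integerp f) (integerp g)) (+ f g))  ; `Int n + `Int m
        ((eql f 0) g)                              ; 0 + e -> e
        ((eql g 0) f)                              ; e + 0 -> e
        ((add-node-p g)                            ; f + (g + h) -> (f + g) + h
         (simplify-+ (simplify-+ f (add-node-left g))
                     (add-node-right g)))
        (t (make-add-node :left f :right g))))     ; otherwise build `Add(f, g)

;; (simplify-+ 3 (make-add-node :left 4 :right 5)) => 12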

> Run ocaml ...

% whereis ocaml
ocaml:
(rest of what you say is moot here)
From: Tim X
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <87y74y9jvm.fsf@lion.rapttech.com.au>
Jon Harrop <···@ffconsultancy.com> writes:

> John Thingstad wrote:
>> On Fri, 20 Jun 2008 22:19:33 +0200, Jon Harrop
>> <···@ffconsultancy.com> wrote:
>> 
>> Spoken like someone who has never actually programmed Lisp.
>> 
>> Just earlier this week I made a program to compute prime numbers.
>> It found all primes < 1000000 in 1.015 seconds.
>> That is just as fast as the same algorithm in C...
>
> Yes, of course. That task is so simple that Lisp's deficiencies are not
> relevant.

And how do you know the same doesn't hold for the OP? You just came out
and fired off the "lisp is slow" argument without any knowledge of what the OP
was trying to do. For all you know, it may be something that could
perform as fast as C or it could be something that is so easily done in
CL that even if it was a little slower, it would still be easier and
more maintainable. 

Your advice reminds me of when I asked someone for directions and they
responded with "Well, if it's there you want to go, I wouldn't be
starting from here!", completely pointless and of absolutely no
help. The OP wants to write his program in CL and just wanted a parallel
system to use. He wasn't asking what language he should use and probably
is even less interested in your opinion of CL and like the rest of us,
probably wonders why you bother even reading the group.

Tim

-- 
tcross (at) rapttech dot com dot au
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <8rydnV-0L_D0sMPVnZ2dneKdnZydnZ2d@posted.plusnet>
Tim X wrote:
> And how do you know the same doesn't hold for the OP?

I don't. If the OP really was writing only programs as simple as a prime
sieve then my advice is irrelevant. Otherwise, it is of paramount
importance.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: John Thingstad
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <op.uc2j12m8ut4oq5@pandora.alfanett.no>
On Fri, 20 Jun 2008 22:19:33 +0200, Jon Harrop
<···@ffconsultancy.com> wrote:

Running Pascal's code:

Cl-USER 6 > (time (test 10000000))
Timing the evaluation of (TEST 10000000)

User time    =        1.328
System time  =        0.000
Elapsed time =        1.421
Allocation   = 4780 bytes
0 Page faults

That's in fact 43% faster than the OCaml program.

Of course I use LispWorks, not SBCL, which seems to optimize CLOS code
better.
(Pentium 820 dual core, 2.8 GHz, running Windows.)
Anyhow, 10 million simplifications in a little more than a second doesn't
strike me as particularly slow.
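
Pascal's code itself is not reproduced in this thread. For readers
wondering what "CLOS code" refers to here, the following is only a sketch
of the general shape such a simplifier takes -- generic-function dispatch
on the types of both operands instead of structural pattern matching --
and not Pascal's actual code:

(defclass add-expr ()
  ((left  :initarg :left  :reader left)
   (right :initarg :right :reader right)))

(defgeneric simplify-add (f g))

(defmethod simplify-add ((f integer) (g integer)) (+ f g))
(defmethod simplify-add ((f (eql 0)) g) g)           ; 0 + e -> e
(defmethod simplify-add (f (g (eql 0))) f)           ; e + 0 -> e
(defmethod simplify-add (f (g add-expr))             ; reassociate to the left
  (simplify-add (simplify-add f (left g)) (right g)))
(defmethod simplify-add (f g)                        ; default: build a node
  (make-instance 'add-expr :left f :right g))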

--------------
John Thingstad
From: Kenny
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <485c696a$0$5005$607ed4bc@cv.net>
John Thingstad wrote:
> On Fri, 20 Jun 2008 22:19:33 +0200, Jon Harrop
> <···@ffconsultancy.com> wrote:
> 
> Running Pascal's code:
> 
> Cl-USER 6 > (time (test 10000000))
> Timing the evaluation of (TEST 10000000)
> 
> User time    =        1.328
> System time  =        0.000
> Elapsed time =        1.421
> Allocation   = 4780 bytes
> 0 Page faults
> 
> That's in fact 43% faster than the OCaml program.
> 
> Of course I use LispWorks, not SBCL, which seems to optimize CLOS code
> better.
> (Pentium 820 dual core, 2.8 GHz, running Windows.)
> Anyhow, 10 million simplifications in a little more than a second doesn't
> strike me as particularly slow.
> 
> --------------
> John Thingstad

You answered Harrop? You really answered Harrop? After all this time,
you actually really seriously answered Harrop? Ariel has an excuse; for
you we need some more Shakespeare, not sure what... ah, the solstice is
at hand, not exactly mid, but something from A Midsummer Night's Dream
must apply...

kt
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <lP-dnSAnaMx0TMHVnZ2dnUVZ8vmdnZ2d@posted.plusnet>
Ariel wrote:
> I thought using a modern compiler for Lisp would allow it to perform as
> well as any other standard modern day high level language?  (Or so says
> Paul Graham.)  Is this a false statement?

Here is another example: the Mersenne Twister PRNG. Comparing the Common
Lisp code here:

  http://www.cliki.net/MT19937

to implementations in other languages when computing 10^8 random integers:

C:      0.61s
F#:     1.77s
OCaml:  3.4s
SBCL:  13.6s

As you can see, Lisp is very slow on this benchmark as well, at least when
compiled with SBCL.
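
For anyone wanting to check the SBCL figure, a minimal harness along these
lines should come close; it assumes the MT19937 library linked above has
been loaded and that its MT19937:RANDOM mirrors CL:RANDOM's signature
(TIME-MT is an invented name):

(defun time-mt (n)
  ;; Time N draws of 32-bit integers; the running LOGXOR keeps the
  ;; loop from being optimized away.
  (let ((state (mt19937:make-random-state t))
        (acc 0))
    (time (dotimes (i n acc)
            (setf acc (logxor acc (mt19937:random 4294967296 state)))))))

;; (time-mt 100000000) ; 10^8 random integers, as in the benchmark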

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
From: Ariel
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <20080621051808.6eb50535.no@mail.poo>
On Sat, 21 Jun 2008 11:00:02 +0100
Jon Harrop <···@ffconsultancy.com> wrote:

> Ariel wrote:
> > I thought using a modern compiler for Lisp would allow it to perform as
> > well as any other standard modern day high level language?  (Or so says
> > Paul Graham.)  Is this a false statement?
> 
> Here is another example: the Mersenne Twister PRNG. Comparing the Common
> Lisp code here:
> 
>   http://www.cliki.net/MT19937
> 
> to implementations in other languages when computing 10^8 random integers:
> 
> C:      0.61s
> F#:     1.77s
> OCaml:  3.4s
> SBCL:  13.6s
> 
> As you can see, Lisp is very slow on this benchmark as well, at least when
> compiled with SBCL.

You only linked to a project whose main goal was consistency across platforms and implementations over speed.  Is there a link to these benchmark comparisons?
-a
From: Jon Harrop
Subject: Re: Parallel Common-Lisp with at least 64 processors?
Date: 
Message-ID: <irednZ7TiY9GtsDVnZ2dnUVZ8tPinZ2d@posted.plusnet>
Ariel wrote:
> On Sat, 21 Jun 2008 11:00:02 +0100
> Jon Harrop <···@ffconsultancy.com> wrote:
>> C:      0.61s
>> F#:     1.77s
>> OCaml:  3.4s
>> SBCL:  13.6s
> 
> You only linked to a project whose main goal was consistency across
> platforms and implementations over speed.

They also proudly claimed improved performance over a competitor:

  "...faster than the JMT Mersenne Twister implementation"

Moreover, the C code is consistent between platforms, and the F# code runs
under .NET or Mono, but the OCaml code was 64-bit only.

> Is there a link to these benchmark comparisons? -a

I just downloaded and ran Mersenne Twisters in each language, except for the
F# code, which is part of our commercial F# for Numerics library.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u