From: Joseph O'Rourke
Subject: CMU CL vs. CLISP?
Date: 
Message-ID: <37947b4a.0@news.smith.edu>
I am preparing to install a free Lisp for teaching Intro. to AI.
Are there any reasons to choose between CMU Common Lisp and
CLISP?  I think both will run on my primary platform (SGI Irix 6.5),
and both run on other platforms.  I cannot tell easily from the
documentation I've studied if one is more stable, thorough,
efficient, easier to interface to editors, etc., than the other.  
I would appreciate advice from those with experience.  Thanks!

From: Johan Kullstam
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <m2iu7ead4z.fsf@sophia.axel.nom>
·······@grendel.csc.smith.edu (Joseph O'Rourke) writes:

> I am preparing to install a free Lisp for teaching Intro. to AI.  Is
> there any reasons to choose between CMU Common Lisp and CLISP?  I
> think both will run on my primary platform (SGI Irix 6.5), and both
> run on other platforms.  I cannot tell easily from the documentation
> I've studied if one is more stable, thorough, efficient, easier to
> interface to editors, etc., than the other.  I would appreciate
> advice from those with experience.  Thanks!

i would recommend CLISP.  CLISP is a fairly complete common-lisp with
small memory footprint and it is easy to use (especially with built-in
gnu readline).  CLISP will run on many operating systems including
linux and windows. (not that i'm a big fan of windows, but it *is*
common.  this way students can use it easily at home.)

CMUCL is good, but it's a bit industrial strength.  the compiler is
wordy and complains a lot about type inferences and such.  this is, to
be sure, useful since CMUCL can produce fast number-crunching code,
but may be a bit overwhelming to the neophyte.  CMUCL exists for a few
popular flavors of unix and afaik does not do windows.

both CLISP and CMUCL can be run from within EMACS.  using the lisp source
editor and inferior lisp modes of EMACS makes my life easier.

-- 
J o h a n  K u l l s t a m
[········@ne.mediaone.net]
Don't Fear the Penguin!
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <932571082.483197@fire-int>
Johan Kullstam (········@ne.mediaone.net) wrote:

: CMUCL is good, but it's a bit industrial strength. 

What do you mean by `industrial strength'?

I would say the opposite is true.  Clisp is being used for real
`industrial strength' projects.  I don't think this is the case with
CMUCL (but I may be wrong).

P.
From: Mark Carroll
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <CLq*O2x5n@news.chiark.greenend.org.uk>
In article <················@fire-int>,
Pierpaolo Bernardi <········@cli.di.unipi.it> wrote:
(snip)
>I would say the opposite is true.  Clisp is being used for real
>`industrial strength' projects.  I don't think this is the case with
>CMUCL (but I may be wrong).

I know of at least one commercial company that uses CMU CL for some
of its development. (XML based web database stuff, I believe...)

-- Mark
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <87ogh55zp0.fsf@orion.dent.isdn.cs.tu-berlin.de>
Mark Carroll <·····@chiark.greenend.org.uk> writes:

> In article <················@fire-int>,
> Pierpaolo Bernardi <········@cli.di.unipi.it> wrote:
> (snip)
> >I would say the opposite is true.  Clisp is being used for real
> >`industrial strength' projects.  I don't think this is the case with
> >CMUCL (but I may be wrong).
> 
> I know of at least one commercial company that uses CMU CL for some
> of its development. (XML based web database stuff, I believe...)

I know of at least one commercial company that uses CMU CL for its
factory-floor simulation software suite... ;)

While CLISP is a nice implementation, it has serious problems when you 
use it for large to huge data-sets, IMHO.

There is also another annoying little problem with CLISP:  While I
generally have few problems keeping stuff portable across most
other CL implementations, CLISP often disagrees with all other
implementations on some things[1].

Things may have changed since the last time I tried to port some
things to CLISP though, so YMMV...

Regs, Pierre.

Footnotes: 
[1]  This doesn't necessarily mean that CLISP is wrong, or non-conforming.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Bernhard Pfahringer
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <7n6lkr$1774$1@www.univie.ac.at>
In article <··············@orion.dent.isdn.cs.tu-berlin.de>,
Pierre R. Mai <····@acm.org> wrote:
>
>There is also another annoying little problem with CLISP:  While I
>generally have little problems keeping stuff portable across most
>other CL implementations, CLISP often disagrees with all other
>implementations on some things[1].
>

Are you aware of the "ANSI" flag of CLISP:

 -a   ANSI CL compliant: Comply with the ANSI  CL  specifica-
      tion even on those issues where ANSI CL is broken. This
      option is provided for maximum portability of Lisp pro-
      grams. It is not useful for actual everyday work.

I've only recently discovered that flag, it can be helpful at times
(should RTFM more often :-)

Bernhard
-- 
--------------------------------------------------------------------------
Bernhard Pfahringer
Austrian Research Institute for  http://www.ai.univie.ac.at/~bernhard/
Artificial Intelligence          ········@ai.univie.ac.at 
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <87wvvs5380.fsf@orion.dent.isdn.cs.tu-berlin.de>
········@hummel.ai.univie.ac.at (Bernhard Pfahringer) writes:

> Are you aware of the "ANSI" flag of CLISP:
> 
>  -a   ANSI CL compliant: Comply with the ANSI  CL  specifica-
>       tion even on those issues where ANSI CL is broken. This
>       option is provided for maximum portability of Lisp pro-
>       grams. It is not useful for actual everyday work.
> 
> I've only recently discovered that flag, it can be helpful at times
> (should RTFM more often :-)

Interesting.  This seems to be a "recent" addition (well, I haven't
kept up to date with CLISP too closely in recent times...).  Maybe I'll
try to revisit CLISP for some things...

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Gareth McCaughan
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <863dyhd8gs.fsf@g.local>
Johan Kullstam wrote:

> i would recommend CLISP.  CLISP is a fairly complete common-lisp with
> small memory footprint and it is easy to use (especially with built-in
> gnu readline).  CLISP will run on many operating systems including
> linux and windows. (not that i'm a big fan of windows, but it *is*
> common.  this way students can use it easily at home.)
> 
> CMUCL is good, but it's a bit industrial strength.  the compiler is
> wordy and complains a lot about type inferences and such.  this is, to
> be sure, useful since CMUCL can produce fast number-crunching code,
> but may be a bit overwhelming to the neophyte.  CMUCL exists for a few
> popular flavors of unix and afaik does not do windows.

On the other hand, CMUCL does have a native-code compiler, and
CLISP doesn't, so if performance matters CMUCL may win. (For some
purposes CLISP will likely be *faster* than CMUCL, though; I'm
told its bignums are especially good.)

The version of CLISP I have (which is admittedly a bit old) also
doesn't grok inline functions. This, plus the fact that "built-in"
operations tend to be much faster than build-it-yourself ones,
does slightly discourage one from constructing abstractions;
that's a shame.

None of this means I don't like CLISP, by the way. It's a great
piece of software, especially in view of its ability to live in
small machines.

-- 
Gareth McCaughan  ················@pobox.com
sig under construction
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <932660400.222123@fire-int>
Gareth McCaughan (················@pobox.com) wrote:
: Johan Kullstam wrote:

: > CMUCL is good, but it's a bit industrial strength.  the compiler is
: > wordy and complains a lot about type inferences and such.  this is, to
: > be sure, useful since CMUCL can produce fast number-crunching code,
: > but may be a bit overwhelming to the neophyte.  CMUCL exists for a few
: > popular flavors of unix and afaik does not do windows.

: On the other hand, CMUCL does have a native-code compiler, and
: CLISP doesn't, so if performance matters CMUCL may win.

In my understanding, `industrial strength' means correct and supported.
 
: (For some
: purposes CLISP will likely be *faster* than CMUCL, though; I'm
: told its bignums are especially good.)

try comparing the speed of (random (expt 10 500)) on Clisp, CMUCL
and Allegro.

: The version of CLISP I have (which is admittedly a bit old) also
: doesn't grok inline functions. 

Allegro doesn't grok them either. 

In Clisp, functions are inlined by the file compiler, but not by
COMPILE (as far as I can remember, Clisp has always worked in this
way).

: This, plus the fact that "built-in"
: operations tend to be much faster than build-it-yourself ones,
: does slightly discourage one from constructing abstractions;
: that's a shame.

I don't understand this.  You are complaining that built-in functions
are too fast?

Should be easy to fix.  Just insert a delay in the interpreter loop
whenever a built-in function is called.  You may even make this delay
so big as to make build-it-yourself functions more convenient, thus
encouraging constructing abstractions.

P.
From: Tim Bradshaw
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <ey33dygeelr.fsf@lostwithiel.tfeb.org>
* Pierpaolo Bernardi wrote:

> try comparing the speed of (random (expt 10 500)) on Clisp, CMUCL
> and Allegro.

Bignum performance is not always on the critical path for Lisp
applications.


> : This, plus the fact that "built-in"
> : operations tend to be much faster than build-it-yourself ones,
> : does slightly discourage one from constructing abstractions;
> : that's a shame.

> I don't understand this.  You are complaining that built-in fuctions
> are too fast?

No, he's complaining that the byte compiler is too *slow*, so code you
write is always much slower than anything built in.  So you are
encouraged to use the builtin types & functions rather than write your
own.  Which is what he wrote.

> Should be easy to fix.  Just insert a delay in the interpreter loop
> whenever a built-in function is called.  You may even make this delay
> so big as to make build-it-yourself functions more convenient, thus
> encouraging constructing abstractions.

Ho ho.

--tim
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <933040775.19174@fire-int>
Tim Bradshaw (···@tfeb.org) wrote:
: * Pierpaolo Bernardi wrote:

: > try comparing the speed of (random (expt 10 500)) on Clisp, CMUCL
: > and Allegro.

: Bignum performance is not always on the critical path for Lisp
: applications.

And indeed I'm not concerned principally with bignum speed.  I am more
concerned about the apparent lack of care that some implementors put
into implementing such basic functions as RANDOM.  Please try my example
on ACL.

P.
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <87btcyw4gc.fsf@orion.dent.isdn.cs.tu-berlin.de>
········@cli.di.unipi.it (Pierpaolo Bernardi) writes:

> And indeed I`m not concerned principally with bignum speed.  I am more
> concerned about the apparent lack of care that some implementors put
> in implementing such basic functions as RANDOM.  Please try my example
> on ACL.

Beware!  Benchmarking RANDOM is a non-trivial task, and timing
(random (expt 10 500)) is just brain-damaged, unless you can convince
me that this happens to be on the critical path of any
non-brain-damaged program (which I would find hard to believe).  And
BTW random is far from a "basic" function.  The amount of theory
contained in RANDOM is so dense that reading up on it can take you a
couple of months.  And using RANDOM isn't simple for that very
reason.  Random numbers are even more evil in the hands of an unskilled
user than floating point numbers.

If you want to benchmark RANDOM, first understand what you are
benchmarking.  Sadly, most implementations of any language deem
it enough to provide a simplistic (and sometimes even seriously
flawed) RNG, without even documenting the exact algorithm and
parameters used, thereby forcing any serious user to implement
his own RNG anyway.  Comparing the performance of a flawed or
severely restricted RNG to that of a high-quality one is not in
any way meaningful (although there exist many high-quality RNGs
out there which can be quite competitive with your usual crappy
RNG).[1]

Then use realistic examples.  Where would the range argument to the
RNG be calculated afresh every time?  I can't think of any reasonable
case.

And finally you have to discern the different cases.  Whilst ACL's
random implementation can be a bit slow in the general case, the
performance can be sped up considerably in certain special cases
(like when the range is a single float, or probably a fixnum).  If you 
contact your vendor he will be quite willing to give you the necessary 
advice needed for tuning, as usual.

But even given a very simplistic benchmarking approach, what you could
have found out about random performance in ACL, CMU CL and CLISP would 
have been the following:

*********************************************************************
          BEWARE: THIS IS NOT MEANT AS A SERIOUS BENCHMARK!
*********************************************************************

The given test-code is not really realistic, and the RNGs in question
have wildly varying characteristics.  Furthermore performance could be
heavily influenced by the addition or deletion of declarations, and/or
optimization settings, and/or the use of more modern implementation
versions, and/or the use of different architectures, and/or even
different chips of the same architecture (the AMD-K6-2/350 I used at
home for these "benchmarks" has a none too good FP unit, so
performance on serious chips with useful FP performance will be
better, and might skew the results, depending on the implementation
strategies chosen by the RNGs in question).  OS influence seems
unlikely, but can't be ruled out either.  OS in question was Linux
2.2.10.

No rigorous attempt has been made to optimize the test for any
implementation, although an attempt has been made to provide a
slightly "de-optimized" version for ACL, to show the special-casing of
1.0s0.  Since the constructs provided are very direct, the advantages
of CMU CL's type-inference mechanisms are not really utilized, thereby
putting "less intelligent" compilers at less of a disadvantage...

The run-times are really too short in most cases to provide reliable
measures, but I was too lazy to invest any more time into this silly
"benchmark".  The number of digits provided in measurements is a joke,
and should not be taken to indicate any kind of accuracy or certainty.

The RNGs in the versions of CLISP and ACL tested were (according to
documentation and/or source):

CLISP:
# Random number generator after [Knuth: The Art of Computer Programming, Vol. II,
# Seminumerical Algorithms, 3.3.4., Table 1, Line 30], after C. Haynes:
# X is a 64-bit number.  Iteration X := (a*X+c) mod m
# with m=2^64, a=6364136223846793005, c=1.

ACL:
     If number is a fixnum or the single-float 1.0, the algorithm used
     to generate the result is Algorithm A on page 27 of Art of Computer
     Programming, volume 2, 2nd edition, by Donald Knuth (published by
     Addison-Wesley). If number is any other value, a
     linear-congruential generator using 48-bit integers for the seed
     and multiplier is used. Because 48-bit integers are bignums, random
     with an argument other than a fixnum or the single-float 1.0 is
     very inefficient and not recommended.

CMU CL uses the MT-19937 RNG mentioned below.
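
For illustration only, the documented CLISP iteration amounts to
something like this (a sketch with made-up names, not CLISP's actual
source):

```lisp
;; One step of the generator CLISP's documentation describes:
;; X := (a*X + c) mod 2^64, with a = 6364136223846793005 and c = 1.
(defconstant +lcg-a+ 6364136223846793005)
(defconstant +lcg-c+ 1)
(defconstant +lcg-m+ (expt 2 64))

(defun lcg-next (x)
  (mod (+ (* +lcg-a+ x) +lcg-c+) +lcg-m+))
```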

The test-hugenum test produces an error on ACL 5.0, indicating that
#.(expt 10 500) can't be coerced to a double-float.  Therefore no
figures for test-hugenum are available on ACL.  Maybe I missed
something, but IMHO this should work.

Final Word: I really mean it!  Do not use these figures as any kind
of indication of realistic RNG performance!  If you care about RNG
performance, you should probably care more about RNG quality!  If you
still care, and are prepared to do some serious work on benchmarking
them, good luck, and bring along much time (and a PhD in a related
field of mathematics can't do any harm, either).  This silly little
demonstration is solely meant to show how wildly differing results
you can get even under very simplistic conditions, thereby
invalidating any benchmarking approach that tries to give you single
figures or value judgements.

*********************************************************************

Source:

(declaim (optimize (speed 3)))

(defun test-dfloat (n)
  (dotimes (i n)
    (random 1.0d0)))

(defun test-sfloat-var (n)
  (let ((range 1.0s0))
    (dotimes (i n)
      (random range))))

(defun test-sfloat (n)
  (dotimes (i n)
    (random 1.0s0)))

(defun test-bignum (n)
  (dotimes (i n)
    (random #.(expt 2 100))))

(defun test-hugenum (n)
  (dotimes (i n)
    (random #.(expt 10 500))))


*********************************************************************

Results of (time (test-hugenum 100000)):

Implementation		Real-Time(ms)	Consing(Bytes)	GC-Time(ms)
-------------------------------------------------------------------
CLISP 1997-12-06-1	4182		21600000	110
CMU CL CVS 2.4.9	90270		1495314816	38120
ACL TE Linux 5.0	-		-		-

Results of (time (test-bignum 100000)):

Implementation		Real-Time(ms)	Consing(Bytes)	GC-Time(ms)
-------------------------------------------------------------------
CLISP 1997-12-06-1	730		2387272		30
CMU CL CVS 2.4.9	3260		30312608	820
ACL TE Linux 5.0	5653		67001000	480

Results of (time (test-sfloat 1000000)):

Implementation		Run-Time(ms)	Consing(Bytes)	GC-Time(ms)
-------------------------------------------------------------------
CLISP 1997-12-06-1	3896		0		0
CMU CL CVS 2.4.9	220		0		0
ACL TE Linux 5.0	922		32		0

Results of (time (test-sfloat-var 1000000)):

Implementation		Run-Time(ms)	Consing(Bytes)	GC-Time(ms)
-------------------------------------------------------------------
CLISP 1997-12-06-1	3895		0		0
CMU CL CVS 2.4.9	220		0		0
ACL TE Linux 5.0	2785		16000032	80

Results of (time (test-dfloat 1000000)):

Implementation		Run-Time(ms)	Consing(Bytes)	GC-Time(ms)
-------------------------------------------------------------------
CLISP 1997-12-06-1	6481		52000000	610
CMU CL CVS 2.4.9	290		0		0
ACL TE Linux 5.0	27729		399999984	2310

Regs, Pierre.

Footnotes: 
[1]  This is one of many reasons I like CMU CL so much:  The RNG of
CMU CL is currently a Mersenne-Twister Generator (MT-19937) with a
period of 2^19937-1 and 623-dimensional equidistribution.  The
algorithm has been published together with the usual test results in
the ACM Transactions on Modelling and Computer Simulation (TOMACS),
Issue 1/1998, pp. 3-30, by Makoto Matsumoto and Takuji Nishimura.
This is actually where I came across the RNG, and when I decided to
implement it in CL, I found out that it had already been implemented
for CMU CL (together with a considerably bummed implementation for
x86), complete with the necessary references to the paper.  I was
severely impressed (thanks to Raymond Toy and Douglas T. Crosher who
seem responsible for this ;).  The performance of this is very nice in 
non-bignum cases.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Raymond Toy
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4n9081x1q0.fsf@rtp.ericsson.se>
>>>>> "Pierre" == Pierre R Mai <····@acm.org> writes:

    Pierre> Results of (time (test-hugenum 100000)):

    Pierre> Implementation		Real-Time(ms)	Consing(Bytes)	GC-Time(ms)
    Pierre> -------------------------------------------------------------------
    Pierre> CLISP 1997-12-06-1	4182		21600000	110
    Pierre> CMU CL CVS 2.4.9	90270		1495314816	38120
    Pierre> ACL TE Linux 5.0	-		-		-

    Pierre> Results of (time (test-bignum 100000)):

    Pierre> Implementation		Real-Time(ms)	Consing(Bytes)	GC-Time(ms)
    Pierre> -------------------------------------------------------------------
    Pierre> CLISP 1997-12-06-1	730		2387272		30
    Pierre> CMU CL CVS 2.4.9	3260		30312608	820
    Pierre> ACL TE Linux 5.0	5653		67001000	480


I think I know the reason for the relatively slow results for CMUCL.
The generator in this case creates the bignum by essentially
overlapping a bunch of 32-bit random integers by 3 bits.  The intent
is to enhance the randomness of the least significant bits.  However,
the MT-19937 generator is supposed to have good randomness for the
entire 32 bits.  If we truly concatenate the 32-bit numbers together,
we get results like this (on an Ultrasparc II, 300 MHz):

Results of (time (test-hugenum 100000)):

Implementation		Real-Time(ms)	Consing(Bytes)	GC-Time(ms)
-------------------------------------------------------------------
CLISP 1999-05-15	12324		21600000	205
CMU CL 18b+		4940		90899784	840

Results of (time (test-bignum 100000)):

Implementation		Real-Time(ms)	Consing(Bytes)	GC-Time(ms)
-------------------------------------------------------------------
CLISP 1999-05-15	1626		 2400000	 46
CMU CL 18b+		1510		15404520	160
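
For illustration, the straight concatenation described above might be
sketched like this (hypothetical code with made-up names, not the
actual CMUCL source):

```lisp
;; Build an NBITS-wide random integer by concatenating 32-bit
;; outputs of the underlying generator.  RNG32 is any function
;; returning a random (unsigned-byte 32).
(defun random-bignum (nbits rng32)
  (let ((result 0))
    (loop for pos from 0 below nbits by 32
          do (setf result (logior result (ash (funcall rng32) pos))))
    ;; Trim any excess high-order bits beyond NBITS.
    (ldb (byte nbits 0) result)))
```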

To calibrate the results, unmodified CMUCL gives a time of 2160 ms for
the bignum test, so this 300MHz Ultra 30 is about twice the speed of
your K6-2/350.  

To confuse matters more, the time for test-sfloat is 410 ms compared
to your 220 ms.  So your floating point isn't so shabby.  The
difference perhaps is due to the fact that the x86 port uses an
assembly version for the mt-19937 generator and the sparc uses Lisp.
And, the x86 version of CLISP appears to be much faster than the sparc 
version.

Also 1.0s0 is a short float, which is not a single-float.  This
doesn't matter for CMUCL or ACL, but it does for CLISP which does
have true short floats:

Results of (time (test-sfloat 1000000)):

Implementation		Run-Time(ms)	Consing(Bytes)	GC-Time(ms)
-------------------------------------------------------------------
CLISP 1999-05-15	4559		0		0
CMU CL 18b+		 410		48		0

Results of (time (test-ffloat 1000000)):

Implementation		Run-Time(ms)	Consing(Bytes)	GC-Time(ms)
-------------------------------------------------------------------
CLISP 1999-05-15	6674		48000000	796


Isn't benchmarking fun? :-) Making sense of the results is a lot of
fun too! :-)

Ray
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <87yag1dbmb.fsf@orion.dent.isdn.cs.tu-berlin.de>
Raymond Toy <···@rtp.ericsson.se> writes:

[ BTW: Sorry for the tabs in the original post, I forgot to strip them 
  prior to posting... ]

> I think I know the reason for the relatively slow results for CMUCL.
> The generator in this case creates the bignum by essentially
> overlapping a bunch of 32-bit random integers by 3 bits.  The intent
> is to enhance the randomness of the least significant bits.  However,
> the MT-19937 generator is supposed to have good randomness for the
> entire 32 bits.  If we truly concatenate the 32-bit numbers together,
> we get results like this (on a Ultrasparc II, 300 MHz):

Lovely!  Another set of inconsistent results, demonstrating the
silliness of simplistic benchmarking even further! ;)

> To confuse matters more, the time for test-sfloat is 410 ms compared
> to your 220 ms.  So your floating point isn't so shabby.  The
> difference perhaps is due to the fact that the x86 port uses an
> assembly version for the mt-19937 generator and the sparc uses Lisp.

Yes, the assembly version of the mt-19937 on x86 will probably drown
out most other factors.  And even while generating FPs, most work is
still done in the state update operation of mt-rand19937, which uses
integer operations only, which means that FP performance is not a
major factor in this.  So I still think that a 300MHz Ultrasparc II
gives better FP performance than a 350MHz AMD K6-2.

> And, the x86 version of CLISP appears to be much faster than the sparc 
> version.

See below for a more direct comparison.  Since my CLISP seems to cons
only half of your CLISP, it seems to me we are using versions with
different representations.  This might be because I used the "small"
version of CLISP, which uses a 24+8 bit representation (IIRC), and you
used the "wide" version which uses 64 bits (again IIRC).  Or it might
be because of other differences in representation between x86 Linux
and UltraSparcII versions of CLISP.

> Also 1.0s0 is a short float, which is not a single-float.  This
> doesn't matter for CMUCL or ACL, but it does for CLISP which does
> have true short floats:

Oops, yes, thanks for spotting this.  Never benchmark when not 100%
concentrated.  So here is a short comparison between short and single 
float performance of CLISP (both with (time (test-* 1000000))):

Implementation		Real-Time(ms)	Consing(Bytes)	GC-Time(ms)
-------------------------------------------------------------------
CLISP short-float	4027		0		0
CLISP single-float	5097		24000000	360

> Results of (time (test-sfloat 1000000)):
> 
> Implementation		Run-Time(ms)	Consing(Bytes)	GC-Time(ms)
> -------------------------------------------------------------------
> CLISP 1999-05-15	4559		0		0
> CMU CL 18b+		 410		48		0
> 
> Results of (time (test-ffloat 1000000)):
> 
> Implementation		Run-Time(ms)	Consing(Bytes)	GC-Time(ms)
> -------------------------------------------------------------------
> CLISP 1999-05-15	6674		48000000	796

> Isn't benchmarking fun? :-) Making sense of the results is a lot of
> fun too! :-)

Yes, silly-benchmarking is kind of addictive, like micro-optimizing.
And it's like standards and statistics:  So many answers to choose
from.

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Fernando Mato Mira
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <379ED4D3.221651C1@iname.com>
Raymond Toy wrote:

> I think I know the reason for the relatively slow results for CMUCL.
> The generator in this case creates the bignum by essentially
> overlapping a bunch of 32-bit random integers by 3 bits.  The intent
> is to enhance the randomness of the least significant bits.  However,
> the MT-19937 generator is supposed to have good randomness for the
> entire 32 bits.  If we truly concatenate the 32-bit numbers together,

What is "good"? Unless you are comparing the same distribution, or can say that the one
that `is more random' is faster, it's like comparing apples and oranges...

http://random.mat.sbg.ac.at/
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <873dy9dqef.fsf@orion.dent.isdn.cs.tu-berlin.de>
Fernando Mato Mira <········@iname.com> writes:

> Raymond Toy wrote:
> 
> > I think I know the reason for the relatively slow results for CMUCL.
> > The generator in this case creates the bignum by essentially
> > overlapping a bunch of 32-bit random integers by 3 bits.  The intent
> > is to enhance the randomness of the least significant bits.  However,
> > the MT-19937 generator is supposed to have good randomness for the
> > entire 32 bits.  If we truly concatenate the 32-bit numbers together,
> 
> What is "good"? Unless you are comparing the same distribution, or
> can say that the one that `is more random' is faster, it's like
> comparing apples and oranges..

What exactly is your point here?  Raymond was making observations
about the implementation of bignum random number generation from the
32-bit RNs that are generated by the MT-19937 "primitive" generator in 
CMU CL.  His observation was that the technique used was sub-optimal,
since it tried to counter a problem in the usual simplistic RNGs (the
problem of "little randomness" in the least significant bits), a
problem that is not known to be present in MT-19937.

If you really wanted to know the effects of the absence (_or presence_)
of this technique on the quality of the generated bignums,  you'd have 
to go through the usual theoretical and statistical tests.  Since I
assume that these tests haven't been run on the bignum generator as it 
stands, you gain nor lose nothing here.  But given that the least
significand bits of MT-19937 have been examined rigorously, and have
exhibited equally good results for the usual tests than other bits, it 
seems theoretically sound to change the algorithm as proposed.

For further information see the original article on MT-19937, published
by Matsumoto and Nishimura in ACM TOMACS 1/1998, pp. 3-30.  MT-19937 has
also been recommended in a number of introductory papers on RNGs
for simulation use.

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Fernando Mato Mira
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <379F011E.7C9D95C6@iname.com>
"Pierre R. Mai" wrote:

> Fernando Mato Mira <········@iname.com> writes:
>
> > Raymond Toy wrote:
> >
> > > I think I know the reason for the relatively slow results for CMUCL.
> > > The generator in this case creates the bignum by essentially
> > > overlapping a bunch of 32-bit random integers by 3 bits.  The intent
> > > is to enhance the randomness of the least significant bits.  However,
> > > the MT-19937 generator is supposed to have good randomness for the
> > > entire 32 bits.  If we truly concatenate the 32-bit numbers together,
> >
> > What is "good"? Unless you are comparing the same distribution, or
> > can say that the one that `is more random' is faster, it's like
> > comparing apples and oranges..
>
> What exactly is your point here?  Raymond was making observations
> about the implementation of bignum random number generation from the
> 32-bit RNs that are generated by the MT-19937 "primitive" generator in
> CMU CL.  His observation was that the technique used was sub-optimal,
> since it tried to counter a problem in the usual simplistic RNGs (the
> problem of "little randomness" in the least significant bits), a
> problem that is not known to be present in MT-19937.
>
> If you really wanted to know the effects of the absence (_or presence_)
> of this technique on the quality of the generated bignums,  you'd have
> to go through the usual theoretical and statistical tests.  Since I
> assume that these tests haven't been run on the bignum generator as it
> stands, you gain nor lose nothing here.  But given that the least
> significand bits of MT-19937 have been examined rigorously, and have
> exhibited equally good results for the usual tests than other bits, it
> seems theoretically sound to change the algorithm as proposed.

None of this is obvious from the above.  But the main point is that a slower
RNG might work in a case where a faster one doesn't, and the idea of having one
canonical function called `RANDOM' is pretty dangerous in the hands of the
uninitiated, as evidenced by the issue that triggered this discussion.
From: Raymond Toy
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4nwvvkvoxw.fsf@rtp.ericsson.se>
>>>>> "Fernando" == Fernando Mato Mira <········@iname.com> writes:

    Fernando> But the main point is that a slower
    Fernando> RNG might work in a case where a faster one doesn't, and the idea of having 1
    Fernando> canonical function called `RANDOM' is pretty dangerous in the hands of the
    Fernando> noninitiated as evidenced by the issue that triggered this discussion.

Why is one canonical RANDOM bad?  No one seems to complain there's
just one canonical function "COS".

(cos (expt 2d0 120)) returns 0d0 or 1d0 on many Lisps.  According to
one of Kahan's papers, the result should be -0.9258790228548379d0.
(CMUCL sparc but not x86 returns this answer because the libc
implementation does this.)

It seems to me that this is a quality of implementation issue.  If the 
implementation has a good well-tested RANDOM function, then that will
satisfy just about everyone.  For the few where it won't, they'll have 
to roll their own.  The same can be said for COS too, though.

Ray
From: William Tanksley
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <slrn7pv0pm.3bh.wtanksle@dolphin.openprojects.net>
On 28 Jul 1999 10:17:31 -0400, Raymond Toy wrote:
>>>>>> "Fernando" == Fernando Mato Mira <········@iname.com> writes:

>    Fernando> RNG might work in a case where a faster one doesn't, and the idea of having 1
>    Fernando> canonical function called `RANDOM' is pretty dangerous in the hands of the
>    Fernando> noninitiated as evidenced by the issue that triggered this discussion.

>Why is one canonical RANDOM bad?  No one seems to complain there's
>just one canonical function "COS".

Because there _is_ only one COS function.  There are a large number of
possible RANDOM functions, and almost all of them are very bad, and most
of the remaining ones are bad for most purposes.

>(cos (expt 2d0 120)) returns 0d0 or 1d0 on many Lisps.  According to
>one of Kahan's papers, the result should be -0.9258790228548379d0.
>(CMUCL sparc but not x86 returns this answer because the libc
>implementation does this.)

Then the COS function on those Lisps is buggy for that value.  No problem.

>It seems to me that this is a quality of implementation issue.  If the 
>implementation has a good well-tested RANDOM function, then that will
>satisfy just about everyone.  For the few where it won't, they'll have 
>to roll their own.  The same can be said for COS too, though.

Needs for RANDOM differ according to use.  Many games and tests need
repeatability, so an rng with seed extraction is best.  General crypto
needs a huge period and a vast number of seeds, so seed extraction is not
so nice.  OTP crypto can't have repeatability or seed extraction (of
course, it can't be done in software, and there's no proven way to do
it any other way either).

That's only two fields, without even considering performance requirements.

>Ray

-- 
-William "Billy" Tanksley
From: Raymond Toy
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4nd7xbvahe.fsf@rtp.ericsson.se>
>>>>> "William" == William Tanksley <········@dolphin.openprojects.net> writes:

    >> (cos (expt 2d0 120)) returns 0d0 or 1d0 on many Lisps.  According to
    >> one of Kahan's papers, the result should be -0.9258790228548379d0.
    >> (CMUCL sparc but not x86 returns this answer because the libc
    >> implementation does this.)

    William> Then the COS function on those Lisps is buggy for that
    William> value.  No problem.

There are valid reasons for disagreeing with Kahan, so calling them
buggy is questionable too.

Ray
From: Johan Kullstam
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <m2hfmo5ocd.fsf@sophia.axel.nom>
Raymond Toy <···@rtp.ericsson.se> writes:

> >>>>> "Fernando" == Fernando Mato Mira <········@iname.com> writes:
> 
>     Fernando> RNG might work in a case where a faster one doesn't, and the idea of having 1
>     Fernando> canonical function called `RANDOM' is pretty dangerous in the hands of the
>     Fernando> noninitiated as evidenced by the issue that triggered this discussion.
> 
> Why is one canonical RANDOM bad?  No one seems to complain there's
> just one canonical function "COS".

there is only one COS function.  

in contrast there are a plethora of random number generators (RNGs).

all software RNGs are algorithmically driven pseudo-random number
generators.  the values only *look random*.  and they only look random
or not depending on how you look at them!

different people need different things from their RNG.  in my modem
simulations at work, the program typically spends 10-25% of its
time in the RNG itself.  since i fire off long simulations (lasting
hours to weeks), i've gotten a quick generator with enough randomness
and bummed the stuffing out of it.  (it's written in C++ (no flames
please) and based on an algorithm by knuth.  numerical recipes calls
it ran3.  if anyone wants it i am happy to share.)

one thing CL could do better would be to offer *multiple* (no less
than 3) random number generators *with explicit and thorough
documentation and repeatability across architecture and vendor*.
undocumented and unknown RNGs are *worthless* imho.  anyone who has
done much work with RNGs knows (through pain, suffering and much
gnashing of teeth) one size does not fit all.  that way you could be
assured of portability and consistency along with freedom to choose a
suitable RNG for your application.

in RNGs many things are important
0) repeatability (same seed => same results)
1) independence of samples
2) seed complexity (scalar versus complex structure such as an array)
3) speed

note that 0) and 1) are in obvious tension:
a software RNG is 100% predictable (0), yet its output must appear
random and independent (1) in the application.

everything is a trade-off.  what appears to be independent in one
application may not be in another statistical test.  speed may or
may not be crucial.

> (cos (expt 2d0 120)) returns 0d0 or 1d0 on many Lisps.  According to
> one of Kahan's papers, the result should be -0.9258790228548379d0.
> (CMUCL sparc but not x86 returns this answer because the libc
> implementation does this.)
> 
> It seems to me that this is a quality of implementation issue.  If the 
> implementation has a good well-tested RANDOM function, then that will
> satisfy just about everyone.

no, it will satisfy only the naive.

> For the few where it won't, they'll have to roll their own.

is this the scheme answer?

> The same can be said for COS too, though.

not really.

-- 
J o h a n  K u l l s t a m
[········@ne.mediaone.net]
Don't Fear the Penguin!
From: Fernando Mato Mira
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <379F2188.E1686691@iname.com>
Raymond Toy wrote:

> >>>>> "Fernando" == Fernando Mato Mira <········@iname.com> writes:
>
>     Fernando> RNG might work in a case where a faster one doesn't, and the idea of having 1
>     Fernando> canonical function called `RANDOM' is pretty dangerous in the hands of the
>     Fernando> noninitiated as evidenced by the issue that triggered this discussion.
>
> Why is one canonical RANDOM bad?  No one seems to complain there's
> just one canonical function "COS".

But doesn't an optimal approximation to COS at a given precision `exist'? Can there be an
optimal `RANDOM'?
From: Raymond Toy
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4nu2qovkzq.fsf@rtp.ericsson.se>
>>>>> "Fernando" == Fernando Mato Mira <········@iname.com> writes:

    Fernando> But doesn't an optimal approximation to COS at a given
    Fernando> precision `exist'? Can there be an optimal `RANDOM'?

The existence of an optimal approximation doesn't mean the
implementation actually does this or is even willing to do this.  And
optimal needs to be defined, so there can be an "optimal" RANDOM, for
an appropriately chosen definition of optimal.

Ray
From: Stig Hemmer
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <ekvbtcwbl9r.fsf@epoksy.pvv.ntnu.no>
Raymond Toy <···@rtp.ericsson.se> writes:
> The existence of an optimal approximation doesn't mean the
> implementation actually does this or is even willing to do this.  And
> optimal needs to be defined, so there can be an "optimal" RANDOM, for
> an appropriately chosen definition of optimal.

Well, the problem is that different people's definitions of "optimal"
won't match up.  Won't even be compatible.

E.g. One person will only be satisfied by hardware-generated true
random bits.  Another person prizes execution speed above all.

These two people will never agree on which is the optimal RANDOM.

With COS, on the other hand, people are much more likely to agree on
what an optimal implementation should do.  If not totally, then at
least enough to be satisfied by the same implementation.

Recommended reading: The chapter on random numbers in Donald E. Knuth's
book "The Art of Computer Programming"

Stig Hemmer,
Jack of a Few Trades.
From: Raymond Toy
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4nr9lsv6av.fsf@rtp.ericsson.se>
>>>>> "Stig" == Stig Hemmer <····@pvv.ntnu.no> writes:

    Stig> Raymond Toy <···@rtp.ericsson.se> writes:
    >> The existence of an optimal approximation doesn't mean the
    >> implementation actually does this or is even willing to do this.  And
    >> optimal needs to be defined, so there can be an "optimal" RANDOM, for
    >> an appropriately chosen definition of optimal.

    Stig> Well, the problem is that different people's definitions of "optimal"
    Stig> won't match up.  Won't even be compatible.

    Stig> E.g. One person will only be satisfied by hardware-generated true
    Stig> random bits.  Another person prizes execution speed above all.

    Stig> These two people will never agree on which is the optimal RANDOM.

My reply was rather flippant, and I should have included a smiley.

:-)

    Stig> With COS, on the other hand, people are much more likely to agree on
    Stig> what an optimal implementation should do.  If not totally, then at
    Stig> least enough to be satisfied by the same implementation.

This is circular.  You allow COS to work for some definition of
optimal for most people, but not RANDOM.  Makes no sense to me.
Granted, there are probably many more definitions of "optimal" for
RANDOM than for COS.

    Stig> Recommended reading: The chapter on random numbers in Donald E. Knuth's
    Stig> book "The Art of Computer Programming"

Done so.  Several times.

Ray
From: Christopher Browne
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <L8Nn3.59468$AU3.1501259@news2.giganews.com>
On 28 Jul 1999 17:00:08 -0400, Raymond Toy <···@rtp.ericsson.se> wrote:
>>>>>> "Stig" == Stig Hemmer <····@pvv.ntnu.no> writes:
>    Stig> With COS, on the other hand, people are much more likely to agree on
>    Stig> what an optimal implementation should do.  If not totally, then at
>    Stig> least enough to be satisfied by the same implementation.
>
>This is circular.  You allow COS to work for some definition of
>optimal for most people, but not RANDOM.  Makes no sense to me.
>Granted, there are probably many more definitions of "optimal" for
>RANDOM than for COS.

COS is a function that has the property that for any given input, x,
one may expect to get a consistent, repeatable output, cos(x).  There
should be a single, unambiguous value.

RANDOM does not have this property, and indeed is not a function.

It has the property that you want to get "relatively unpredictable"
values back.  Knuth describes the sorts of desirable properties in
greater detail.

The fact that one is a mathematical function, and the other is *not,*
provides ample evidence to my mind that they are permitted to behave
quite differently.

>Stig> Recommended reading: The chapter on random numbers in Donald
>Stig> E. Knuth's book "The Art of Computer Programming"
>
>Done so.  Several times.

And you're still not clear on the fact that there's no universally
"optimal" RNG?  I'm disappointed...
-- 
If I could put Klein in a bottle...
········@hex.net- <http://www.hex.net/~cbbrowne/lsf.html>
From: Raymond Toy
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4niu73vb7u.fsf@rtp.ericsson.se>
>>>>> "Christopher" == Christopher Browne <········@news.hex.net> writes:

    Christopher> The fact that one is a mathematical function, and the other is *not,*
    Christopher> provides ample evidence to my mind that they are permitted to behave
    Christopher> quite differently.

Is not  

X(n) = [a*X(n-1) + b] mod c

a well-defined mathematical statement?  Is it any less well-defined
than, say,

cos(x) = Re(e^(j*x))?
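(As a concrete aside, the recurrence is easy to state as code.  A
minimal Common Lisp sketch; the constants are illustrative examples,
not any particular implementation's actual parameters:)

```lisp
;; A linear congruential generator as a closure over its state X.
;; The constants below are arbitrary examples, not CMUCL's.
(defun make-lcg (seed &key (a 1103515245) (b 12345) (c (expt 2 31)))
  (let ((x seed))
    (lambda ()
      (setf x (mod (+ (* a x) b) c)))))

;; Same seed, same sequence -- a perfectly well-defined function of n.
(let ((gen (make-lcg 42)))
  (loop repeat 3 collect (funcall gen)))
```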

I wish I could recall my flippant comment, but I can't.

Everyone wants optimal, but no one defines what optimal means.  My
comment was that if you don't say what it means, I'm going to use my
own definition.

Ray
From: Christopher B. Browne
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <slrn7q0mnh.c39.cbbrowne@knuth.brownes.org>
On 29 Jul 1999 09:26:13 -0400, Raymond Toy <···@rtp.ericsson.se> posted:
>>>>>> "Christopher" == Christopher Browne <········@news.hex.net> writes:
>
>    Christopher> The fact that one is a mathematical function, and the other is *not,*
>    Christopher> provides ample evidence to my mind that they are permitted to behave
>    Christopher> quite differently.
>
>Is not  
>
>X(n) = [a*X(n-1) + b] mod c
>
>a well-defined mathematical statement?  Is it any less well-defined
>than, say,
>
>cos(x) = Re(e^(j*x))?
>
>I wish I could recall my flippant comment, but I can't.
>
>Everyone wants optimal, but no one defines what optimal means.  My
>comment was that if you don't say what it means, I'm going to use my
>own definition.

If I had meant to say "well-defined mathematical statement," I might
have done so.  I said, and *intended,* the word "function."

Your statement of X(n) may be a well-defined recurrence equation; it
is *not* a function.

if (cos 3.141) returns -0.9999998, then we can quite unambiguously say
whether that's right or wrong.

If (random 145) returns 122, does that represent a member of a random
sequence?  I dunno.  You dunno.  Everybody dunno.

"Optimal" is not a sensible term to use in this context; the issue in
picking a RNG is whether the RNG in question satisfies your
requirements.
- Does it have a long enough period?
- Does it need to be repeatable?
- Does it need to be cryptographically strong?  Or not?
- Does it require maintaining a large amount of state information?  Or
  not? 

Those are the sorts of issues that are used to select a RNG; with the
number of dimensions there, no objectively "optimal" choice is
*possible.*

-- 
The first cup of coffee recapitulates phylogeny.
········@ntlug.org- <http://www.hex.net/~cbbrowne/lsf.html>
From: Raymond Toy
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4naesfv5se.fsf@rtp.ericsson.se>
>>>>> "Christopher" == Christopher B Browne <········@news.brownes.org> writes:


    Christopher> Your statement of X(n) may be a well-defined recurrence equation; it
    Christopher> is *not* a function.

It defines a function that maps an integer n to an integer.

    Christopher> if (cos 3.141) returns -0.9999998, then we can quite unambiguously say
    Christopher> whether that's right or wrong.

    Christopher> If (random 145) returns 122, does that represent a member of a random
    Christopher> sequence?  I dunno.  You dunno.  Everybody dunno.

If a, b, and c (and an initial value for X) were given, we could.
Perhaps not easily, but it is possible.

The main difference is that random has a state that is not usually
exposed, and the argument to random isn't used the same way as it is
in cos.

    Christopher> Those are the sorts of issues that are used to select a RNG; with the
    Christopher> number of dimensions there, no objectively "optimal" choice is
    Christopher> *possible.*

Optimality is never objective.  You subjectively choose what optimal
means and then demonstrate that something satisfies your definition of
optimal.

Ray
From: David Thornley
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <Xzpo3.1537$X97.238839@ptah.visi.com>
In article <··············@rtp.ericsson.se>,
Raymond Toy  <···@rtp.ericsson.se> wrote:
>>>>>> "Christopher" == Christopher B Browne <········@news.brownes.org> writes:
>
>    Christopher> if (cos 3.141) returns -0.9999998, then we can quite unambiguously say
>    Christopher> whether that's right or wrong.
>
>    Christopher> If (random 145) returns 122, does that represent a member of a random
>    Christopher> sequence?  I dunno.  You dunno.  Everybody dunno.
>
>If a, b, and c (and initial starting value for X), were given, we
>could.  Perhaps not easily, but possible.
>
No, we could say if it is a member of a sequence generated by a certain
generator.  That is not the same thing.

Given any number, we can agree on what its cosine should be.  We can
calculate it.  Any given implementation of COS will return a value,
and we can objectively say "That's off" or "That's accurate to six
decimal places" or whatever.

It doesn't matter what the implementation of COS is.  We usually don't
know exactly how a Lisp system implements it.  We can still make
statements about whether it is accurate or inaccurate in given cases.
It's a well-defined mathematical function.

Randomness isn't well-defined.  There is no canonical answer to
(RANDOM 145), and there is a canonical answer to (COS 145).

>The main difference is that random has a state that is not usually
>exposed, and the argument to random isn't used the same way as it is
>in cos.
>
No, the main difference is that cos is well-defined and random isn't.

>
>Optimality is never objective.  You subjectively choose what optimal
>means and then demonstrate that something satisfies your definition of
>optimal.
>
Sure.  One implementation of COS may be faster and less accurate
than another one, and which is optimal is a matter of opinion.
Evaluating a random-number generator is far more difficult.


--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: ········@cc.hut.fi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <m33dy3sqv4.fsf@mu.tky.hut.fi>
> Given any number, we can agree on what its cosine should be.  We can
> calculate it.  Any given implementation of COS will return a value,
> and we can objectively say "That's off" or "That's accurate to six
> decimal places" or whatever.

I suppose the reason COS was dragged into this discussion is that for
relatively large floating point numbers there is no single obviously
correct value for COS to return.  A useful definition is to make COS
return the value that minimizes the maximum error over the range of real
(or complex) numbers that are represented by the floating point argument
(in the sense that the argument is the closest representable number).

But as the absolute value of the argument grows, the minimized maximum
error grows as well.  For complex arguments, the error grows arbitrarily
large.  At what point should COS return NaN or signal an error?

Hannu Rummukainen
From: Christopher Browne
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <206o3.60424$AU3.1550246@news2.giganews.com>
On 29 Jul 1999 11:23:29 -0400, Raymond Toy <···@rtp.ericsson.se> wrote:
>>>>>> "Christopher" == Christopher B Browne
<········@news.brownes.org> writes: 
>> Your statement of X(n) may be a well-defined recurrence
>> equation; it is *not* a function.
>
>It defines a function that maps an integer n to an integer.

No, it maps *two* integers to an integer.  It requires a minimum of
*two* inputs.

And when called as (random modulus), it certainly does not behave in a
functional manner; it has hidden state.

>> if (cos 3.141) returns -0.9999998, then we can quite unambiguously say
>> whether that's right or wrong.
>
>> If (random 145) returns 122, does that represent a member of a random
>> sequence?  I dunno.  You dunno.  Everybody dunno.
>
>If a, b, and c (and initial starting value for X), were given, we
>could.  Perhaps not easily, but possible.

That assumes that the RNG sitting behind (random 145) is a linear
congruential generator.  Not all RNGs are linear congruential.

>The main difference is that random has a state that is not usually
>exposed, and the argument to random isn't used the same way as it is
>in cos.
>
>> Those are the sorts of issues that are used to select a RNG; with the
>> number of dimensions there, no objectively "optimal" choice is
>> *possible.*
>
>Optimality is never objective.  You subjectively choose what optimal
>means and then demonstrate that something satisfies your definition of
>optimal.

Suppose I want to use Lisp, and have a RNG that is accessed using
(RANDOM 145).

What are we to do?  

- Is it appropriate to define a Standard, predictable RNG, much as
  FORTRAN long ago defined URAND?  (Which had a few problems...)

- Should we fixate on linear congruential RNGs, as has been the case
  thus far?

- Is it appropriate to provide a Black Box which I can seed but not
  necessarily expect to predict, with documented behaviour?

- Should it be an error for a CL implementation to provide a RNG that
  uses a modern RNG function?

What is the optimal answer to *standardize* on?
-- 
I called that number and they said whom the Lord loveth he chasteneth.
········@ntlug.org- <http://www.ntlug.org/~cbbrowne/langlisp.html>
From: Vassil Nikolov
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <l03130300b3c7bf90478f@195.138.129.108>
Raymond Toy wrote:                [1999-07-30 10:02 -0400]

  [discussing random number generation]
  > Anyway, this no longer has anything to do with lisp, so if you want to 
  > continue this, we should take it offline.

This may have nothing to do with Lisp, but IMHO it has something to
do with what interests Lisp programmers; therefore, again IMHO, it is
on-topic and does not necessarily have to be taken offline.


Vassil Nikolov
Permanent forwarding e-mail: ········@poboxes.com
For more: http://www.poboxes.com/vnikolov
  Abaci lignei --- programmatici ferrei.
From: Fernando Mato Mira
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <37A30086.57465A67@iname.com>
Vassil Nikolov wrote:

> Raymond Toy wrote:                [1999-07-30 10:02 -0400]
>
>   [discussing random number generation]
>   > Anyway, this no longer has anything to do with lisp, so if you want to
>   > continue this, we should take it offline.
>
> This may have nothing to do with Lisp but IMHO it has something to
> do with what interests Lisp programmers, therefore, again IMHO, it is
> on-topic and does not necessarily have to be taken offline.

It's more than `an interest'. Obviously there are two different viewpoints at
work here: power number crunchers with performance interests, and
language/software engineering people with consistency interests.
From: Raymond Toy
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4n4simutge.fsf@rtp.ericsson.se>
>>>>> "Christopher" == Christopher Browne <········@news.hex.net> writes:

    Christopher> On 29 Jul 1999 11:23:29 -0400, Raymond Toy <···@rtp.ericsson.se> wrote:
    >>>>>>> "Christopher" == Christopher B Browne
    Christopher> <········@news.brownes.org> writes: 
    >>> Your statement of X(n) may be a well-defined recurrence
    >>> equation; it is *not* a function.
    >> 
    >> It defines a function that maps an integer n to an integer.

    Christopher> No, it maps *two* integers to an integer.  It requires a minimum of
    Christopher> *two* inputs.

And yet, it's still a function.

    Christopher> And when called as (random modulus), it certainly does not behave in a
    Christopher> functional manner; it has hidden state.

As I mentioned in my previous message.  But actually, the state isn't
really hidden.  random takes an optional state argument that defaults
to *random-state*.
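For example (standard Common Lisp, nothing implementation-specific
assumed):

```lisp
;; MAKE-RANDOM-STATE copies a state; drawing from a state and from its
;; copy yields identical sequences, so repeatability is in the standard.
(let* ((state (make-random-state t))        ; fresh, randomly seeded
       (copy  (make-random-state state)))   ; snapshot of STATE
  (equal (loop repeat 5 collect (random 145 state))
         (loop repeat 5 collect (random 145 copy))))
;; => T
```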

    >>> If (random 145) returns 122, does that represent a member of a random
    >>> sequence?  I dunno.  You dunno.  Everybody dunno.
    >> 
    >> If a, b, and c (and initial starting value for X), were given, we
    >> could.  Perhaps not easily, but possible.

    Christopher> That assumes that the RNG sitting behind (random 145) is a linear
    Christopher> congruential generator.  Not all RNGs are linear congruential.

Indeed.  But the principle holds.  Every pseudo-random generator is an
algorithm, so I could determine whether 122 is a member of the
sequence if you gave me the starting state and the generator.  It may
not be easy or fast, but it is possible.
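A sketch of that membership test for a hypothetical linear
congruential generator (the constants and the search bound here are
illustrative only, not any real implementation's):

```lisp
;; Does VALUE appear among the first LIMIT draws of (RANDOM M) for an
;; LCG with the given (illustrative) constants and SEED?
(defun in-sequence-p (value m seed &key (a 1103515245) (b 12345)
                                        (c (expt 2 31)) (limit 100000))
  (let ((x seed))
    (loop repeat limit
          do (setf x (mod (+ (* a x) b) c))
          thereis (= value (mod x m)))))

;; e.g. (in-sequence-p 122 145 42) scans the sequence such a generator
;; would feed to (RANDOM 145) starting from seed 42.
```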

    Christopher> Suppose I want to use Lisp, and have a RNG that is accessed using
    Christopher> (RANDOM 145).

    Christopher> What are we to do?  

    Christopher> - Is it appropriate to define a Standard, predictable RNG, much as
    Christopher>   FORTRAN long ago defined URAND?  (Which had a few problems...)

    Christopher> - Should we fixate on linear congruential RNGs, as has been the case
    Christopher>   thus far?

    Christopher> - Is it appropriate to provide a Black Box which I can seed but not
    Christopher>   necessarily expect to predict, with documented behaviour?

    Christopher> - Should it be an error for a CL implementation to provide a RNG that
    Christopher>   uses a modern RNG function?

    Christopher> What is the optimal answer to *standardize* on?

As mentioned, the definition of optimal is subjective.  I never said
it would be easy to agree on a single definition.  :-)

Even for an approximation for cos, optimal is subjective.  Do we want
an L1 approximation?  L2?  L-infinity?  Some other weighted error
approximation?  Over what range?  Once you've decided this, you can
find an optimal approximation.

Anyway, this no longer has anything to do with lisp, so if you want to 
continue this, we should take it offline.

Ray
From: Fernando Mato Mira
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <37A1C046.2BA84E6E@iname.com>
Raymond Toy wrote:

> Even for an approximation for cos, optimal is subjective.  Do we want
> an L1 approximation?  L2?  L-infinity?  Some other weighted error

I'd been thinking about two options:

1. Return the closest value in the, say,  (-Inf,-Inf) direction [loses 1 bit of accuracy].
2. Return two values: the closest one, and the position relative to the actual value.

That is, the result is actually an interval.

Now, an interesting question is whether there are alternatives with the same
accuracy as 1) that cause computations to converge in cases where 1) does not.

There could be an `unsafe' FAST-COS (or whatever), or a switch to enable an alternative
implementation (something analogous to (safety 0)).
From: Raymond Toy
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4nwvvit7e6.fsf@rtp.ericsson.se>
>>>>> "Fernando" == Fernando Mato Mira <········@iname.com> writes:

    Fernando> Raymond Toy wrote:
    >> Even for an approximation for cos, optimal is subjective.  Do we want
    >> an L1 approximation?  L2?  L-infinity?  Some other weighted error

    Fernando> I'd been thinking about two options:

    Fernando> 1. Return the closest value in the, say, (-Inf,-Inf)
    Fernando>    direction [loses 1 bit of accuracy].

For a real-valued function, what does (-Inf, -Inf) direction mean?  Do 
you mean just return the closest value, rounded down by 1 bit?

    Fernando> 2. Return two values: the closest one, and the position
    Fernando>    relative to the actual value.

What?

    Fernando> That is, the result is actually an interval.

Interval routines already exist.

    Fernando> Now, an interesting question is whether there're
    Fernando> alternatives with the same accuracy as 1) that cause
    Fernando> computations to converge in cases where 1) does not.

What computations?  Since 1) rounds down, you've introduced a bias in
your computations, which can probably be exploited to make your
computations diverge.

    Fernando> There could be an `unsafe' FAST-COS (or whatever), or a
    Fernando> switch to enable an alternative implementation
    Fernando> (something analogous to (safety 0)).

Well, you can always do this yourself, so what's the point?

Ray
From: Fernando Mato Mira
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <37A2FEE9.4FE9BAD4@iname.com>
Raymond Toy wrote:

> >>>>> "Fernando" == Fernando Mato Mira <········@iname.com> writes:
>
>     Fernando> Raymond Toy wrote:
>     >> Even for an approximation for cos, optimal is subjective.  Do we want
>     >> an L1 approximation?  L2?  L-infinity?  Some other weighted error
>
>     Fernando> I'd been thinking about two options:
>
>     Fernando> 1. Return the closest value in the, say, (-Inf,-Inf)
>     Fernando>    direction [loses 1 bit of accuracy].
>
> For a real-valued function, what does (-Inf, -Inf) direction mean?  Do
> you mean just return the closest value, rounded down by 1 bit?

Yes.

>     Fernando> 2. Return two values: the closest one, and the position
>     Fernando>    relative to the actual value.
>
> What?

The second value can be the other corner of the interval (that's
probably better than:

>     Fernando> That is, the result is actually an interval.
>
> Interval routines already exist.

Where? Standalone or built on what?

I don't know if this was an answer to 2). [1) Gives you an interval encoded as a
single number (you already know that)]

>     Fernando> Now, and interesting question is whether there're
>     Fernando> alternatives with the same accuracy as 1) that cause
>     Fernando> computations to converge in cases where 1) does not.
>
> What computations?  Since 1 rounds down, you've introduced a bias in
> your computations, which can probably be exploited to make your
> computations diverge.

That's the kind of thing I'm asking: whether it's worse than some
`random' (oh, not again! ;-)) choice of the closest value at the same
precision, not whether it's worse than using the first value of 2).

>     Fernando> There could be an `unsafe' FAST-COS (or whatever), or a
>     Fernando> switch to enable an alternative implementation
>     Fernando> (something analogous to (safety 0)).
>
> Well, you can always do this yourself, so what's the point?

The point is that not only would you know that your program runs on any compiler
implementing the spec, but also that you can write compliance tests. Maybe
FORTRAN doesn't have that because everybody is supposed to know what they are
doing, but what about Ada? [Doesn't this kind of thing make the Ada people
shiver?]
From: Ray Blaak
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <m37lng8yhc.fsf@vault82.infomatch.bc.ca>
Fernando Mato Mira <········@iname.com> writes:
> The point is that not only would you know that your program runs on any
> compiler implementing the spec, but also that you can write compliance
> tests. Maybe FORTRAN doesn't have that because everybody is supposed to know
> what they are doing, but what about Ada? [Doesn't this kind of thing make the
> Ada people shiver?]

Appendix G of the Ada standard lays it all out in picky detail. Check out:

http://www.adahome.com/rm95/rm9x-G.html

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@infomatch.com                            The Rhythm has my soul.
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <87r9ls4a0r.fsf@orion.dent.isdn.cs.tu-berlin.de>
Stig Hemmer <····@pvv.ntnu.no> writes:

> Recommended reading: The chapter on random numbers in Donald E. Knuth's
> book "The Art of Computer Programming"

While this is a classic on RNGs, I would recommend also reading some
of the newer overview papers and reports on RNGs written since TAOCP.
There has been much development in the world of RNGs since then, and
anyone seriously using RNGs should be aware of it.  Sadly, there
exists a huge gap between theory and practice in this area.

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Fernando Mato Mira
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <379F6403.455530AA@iname.com>
Raymond Toy wrote:

> >>>>> "Fernando" == Fernando Mato Mira <········@iname.com> writes:
>
>     Fernando> But doesn't an optimal approximation to COS at a given
>     Fernando> precision `exist'? Can there be an optimal `RANDOM'?
>
> The existence of an optimal approximation doesn't mean the
> implementation actually does this or is even willing to do this.  And
> optimal needs to be defined, so there can be an "optimal" RANDOM, for
> an appropriately chosen definition of optimal.

But a clueless user, or even a naive non-expert like myself with his fair
share of numerical analysis courses during his college days, might not be
too unreasonable to expect COS to be `precise first, fast second' (I
actually never thought that an implementation might go the other way
before).
A couple of times I've done `man random' just to check the arguments and
skipped all the discussion about the method used, except for the line
where it might point to a better one (which I would usually adopt just
like that). Those were for pretty simple uses, but the same might have
happened if I was implementing some NN learning using Monte Carlo
simulation. And maybe the particular test sets would have passed, so you
label it as `OK'. It was not until this year, when I was looking for a
random number generator that I could use for network initialization in a
CSMA/CD fashion in an embedded controller for some cheap appliances, that
I saw the problems.
[Maybe I read something about the importance of RNGs for Monte Carlo
before, but I don't remember.]
It seems pretty clear that you can go through a curriculum which is 50%
math and 50% CS, where you get banged in the head enough about
approximation errors, but not about distribution problems (maybe because
the basic statistics, operations research, and numerical analysis courses
each have their own focus).

It's also possible to overlook the issue when changing, or even worse,
upgrading, compilers. Having the user define RANDOM on his own as
RANDOM-WHATEVER seems more sound to me: he can assume it will keep
working the same on a different compiler, he is forced to think if it's
not there, and he can be sure it won't get pulled out from under his
feet in the next release.

I meant `optimal' in the sense that it works for all programs that manage
to run with some appropriately chosen pseudorandom function.

But maybe everybody will have real hardware-based RANDOM in the near
future, and then the story will reverse..
From: Raymond Toy
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4noggwv5yc.fsf@rtp.ericsson.se>
>>>>> "Fernando" == Fernando Mato Mira <········@iname.com> writes:

    Fernando> too unreasonable to expect COS to be `precise first,
    Fernando> fast second' (I actually never thought that an
    Fernando> implementation might go the other way before).

I think most implementations of COS are much better now than before,
but sometimes "easy" was preferred over "precise" or "fast".  If you
read some of Kahan's articles you see that "fast" is probably more
important than anything else, even for simple things like a*b + c
which might use a multiply-accumulate instruction instead of a
multiply and then a separate add.

    Fernando> I meant `optimal' in the sense that it works for all
    Fernando> programs that manage to run with some appropriately
    Fernando> chosen pseudorandom function.

In that case, I suspect even COS would fail your test.

    Fernando> But maybe everybody will have real hardware-based RANDOM
    Fernando> in the near future, and then the story will reverse..

I thought you could already buy such things.  Some hardware with
radioactive source and detector that attaches to a serial port or
printer port.

Ray
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <87oggw49v4.fsf@orion.dent.isdn.cs.tu-berlin.de>
Raymond Toy <···@rtp.ericsson.se> writes:

> I thought you could already buy such things.  Some hardware with
> radioactive source and detector that attaches to a serial port or
> printer port.

I did once hook up a scintillation counter to a computer, and used this
to generate random numbers for fun.  It was part of a bigger project, and
the RNG part was only to get acquainted with the device.

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Rob Warnock
Subject: physical randomness  [was: Re: CMU CL vs. CLISP? ]
Date: 
Message-ID: <7nog8b$6v5fh@fido.engr.sgi.com>
Pierre R. Mai <····@acm.org> wrote:
+---------------
| I did once hook up a scintillation counter to a computer, and used this
| to generate random numbers for fun.
+---------------

Well, for something *completely* different, try http://lavarand.sgi.com/
(or http://lavarand.sgi.com/cgi-bin/how.cgi to see how it works -- also
hands you a new 128-byte random number each time you reload it!).


-Rob

-----
Rob Warnock, 8L-855		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		FAX: 650-933-0511
Mountain View, CA  94043	PP-ASEL-IA
From: Fernando Mato Mira
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <37A01222.7C9FF40D@iname.com>
Raymond Toy wrote:

>     Fernando> But maybe everybody will have real hardware-based RANDOM
>     Fernando> in the near future, and then the story will reverse..
>
> I thought you could already buy such things.  Some hardware with
> radioactive source and detector that attaches to a serial port or
> printer port.

I really meant everybody, i.e., would something like the Intel thing
qualify?
[Not really the point, but can you imagine selling some software that says
"Radioactive source not included"? Well, I guess it would fare much better
than "Radioactive source included", or then, maybe not ;-) [What?? M$
already sells something like that? ;->]
Hm.. Or maybe I already got lots of those (besides "everything"). Should
make for a good internet/ebay scam: RADIOACTIVE STUFF FOR SALE ;-)]
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <874sio5s2b.fsf@orion.dent.isdn.cs.tu-berlin.de>
Fernando Mato Mira <········@iname.com> writes:

> But a clueless user, or even a naive non-expert like myself with his fair
> share of numerical analysis courses during his college days, might not be
> too unreasonable to expect COS to be `precise first, fast second' (I
> actually never thought that an implementation might go the other way
> before).

You can't trust your expectations.  If you _really_ care about the
details of _any_ numerical function/operator, you have to go out and
check it, in each and every implementation and each and every
version.  Nothing else will do!

That being said, most people don't have to care about it in that much
detail, if they take a few precautionary steps.

That is still no reason not to include a fast, high-quality RNG as the
default, instead of the usual run-of-the-mill stuff.  Especially the
usual LCGs you find in many implementations of many languages are not a
good choice for use as the standard RNG, since unwary users are much
more likely to introduce RNG artifacts into their programs with an LCG
than with most modern RNGs. (Keywords here are the lattice structure
of LCGs, and the quite small period, where empirical evidence suggests
that sqrt(P) or fewer samples should be used.  With a period of ~2^32,
many RNGs will only reliably give you around 65000 samples.  That's
not much nowadays.)
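
That sqrt(P) rule of thumb is easy to check at the REPL; a quick sketch
of my own (USABLE-SAMPLES is an illustrative name, not a standard
function):

```lisp
;; Empirical rule of thumb: draw at most about sqrt(P) samples from an
;; RNG with period P before its structure may start to show.
(defun usable-samples (period)
  (isqrt period))

(usable-samples (expt 2 32))   ; => 65536, the ~65000 figure cited above
```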

> It's also possible to overlook the issue when changing, or even worse,
> upgrading, compilers. Having the user define RANDOM on his own as
> RANDOM-WHATEVER seems more sound to me: he can assume it will keep
> working the same on a different compiler, he is forced to think if it's
> not there, and he can be sure it won't get pulled out from under his
> feet in the next release.

If we take this reasoning to the end, this would imply that RANDOM
should not be part of the standard, or the standard would have to
specify a particular algorithm.  Both are impractical.  You can't
prescribe an algorithm, since this hampers progress, and can give you
big trouble should an important defect be found in the algorithm you
specified.  So this would only leave the option of excluding RANDOM
from the language.  But this would only encourage J. Random Loser to
implement an RNG himself, which usually leads to pretty disastrous
results.  So to protect J. Random Loser from the worst effects of his
own ignorance, I'd claim that we should convince language implementors 
to use high-quality RNGs for the default RANDOM.  High-quality RNGs
which are known to have few defects and pitfalls.

Let me make this clear again:  RANDOM is in the language to protect
and help J. Random Loser, and not for the serious user of RNGs.

> I meant `optimal' in the sense that it works for all programs that manage
> to run with some appropriately chosen pseudorandom function.

Modern RNGs like MT-19937 (or the many others that exist today) are very
versatile.  They work for nearly all non-specialized applications.  You
still have to have a number of other RNGs at hand to test for artifacts,
but for most normal uses, you can get by with one RNG nowadays.

> But maybe everybody will have real hardware-based RANDOM in the near
> future, and then the story will reverse..

A hardware-based RANDOM is even less optimal than a modern RNG.  Most
software applications of RNGs require repeatability, which means you
must be able to regenerate the stream of "random" numbers.  With real
random numbers, you'd have to record the stream, which can get pretty
costly when the numbers get big (which they do pretty quickly in many
fields).
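
In Common Lisp, the repeatability requirement is served by RANDOM-STATE
objects: copying a state lets you regenerate the identical stream instead
of recording it.  A minimal sketch:

```lisp
;; Two copies of one random state yield identical streams, so a
;; simulation can be re-run exactly without storing its random numbers.
(let* ((s1 (make-random-state nil))   ; snapshot of the current *random-state*
       (s2 (make-random-state s1)))   ; independent copy of that snapshot
  (equal (loop repeat 10 collect (random 1000 s1))
         (loop repeat 10 collect (random 1000 s2))))
; => T
```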

The (software) field which would benefit the most from real random
numbers would probably be cryptography and security.  Their uses of
random numbers differ pretty much from most other uses of random
numbers.

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Christopher Browne
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <P8Nn3.59470$AU3.1501259@news2.giganews.com>
On Wed, 28 Jul 1999 17:28:08 +0200, Fernando Mato Mira
<········@iname.com> wrote: 
>Raymond Toy wrote:
>> "Fernando" == Fernando Mato Mira <········@iname.com> writes:
>>> RNG might work in a case where a faster one doesn't, and the idea
>>> of having 1 canonical function called `RANDOM' is pretty dangerous
>>> in the hands of the noninitiated as evidenced by the issue that
>>> triggered this discussion.
>>
>> Why is one canonical RANDOM bad?  No one seems to complain there's
>> just one canonical function "COS".
>
>But doesn't an optimal approximation to COS at a given precision
>`exist'? Can there be an optimal `RANDOM'?

The "optimal" approximation to the cosine function is the value that
is the closest to the actual value for the particular argument.

If a value is incorrect, that is readily evaluated.

The same is not true for random number generators; different processes
that use RNGs may value different properties.

The "traditional" RNGs use linear congruential generator functions.

(defvar seed 12455)
(defconstant modulus 32767)
(defconstant multiplier 1103515245)
(defconstant adder 12345)
(defun randvalue ()
   (setf seed (mod (+ (* seed multiplier) adder) modulus))
   seed)

(No claims made here of how wonderful these parameters are!)

This is dead simple to implement; unfortunately it's not Terribly
Random, and some applications may expose problems with this.

Recently, RNG schemes based on the use of cryptographic functions have
been espoused for some purposes.

Other RNG schemes that provide only fairly small values (e.g. 32 bits),
yet have periods between repetitions of values that are *vastly* larger
than 2^32, have also become fairly popular.

[What I would really like to see, coded in some Lisp variant, is the
algorithm that Knuth presents in the Stanford Graphbase.  It's fairly
32-bit-oriented, which is somewhat unfortunate...]
-- 
Rules of the Evil Overlord #19. "The hero is not entitled to a last
kiss, a last cigarette, or any other form of last request." 
········@hex.net- <http://www.ntlug.org/~cbbrowne/lsf.html>
From: Gareth McCaughan
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <86aesfcjq6.fsf@g.local>
Christopher Browne wrote:

> [What I would really like to see, coded in some Lisp variant, is the
> algorithm that Knuth presents in the Stanford Graphbase.  It's fairly
> 32-bit-oriented, which is somewhat unfortunate...]

It's a nice algorithm, but not spectacularly great. I forget what
tests it fails; have a look at Marsaglia's papers. Knuth's generator
is there called a "lagged Fibonacci" RNG, I think. But it is
extremely fast, and produces good enough results for many
applications.

(I forget the details of the code in the GraphBase; it may in fact
be a minor variation on the lagged-Fibonacci RNG. Unless my brain
is malfunctioning, though, it's at most a minor variation on it.)

-- 
Gareth McCaughan  ················@pobox.com
sig under construction
From: Christopher Browne
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <606o3.60426$AU3.1550246@news2.giganews.com>
On 29 Jul 1999 20:57:21 +0100, Gareth McCaughan
<················@pobox.com> wrote: 
>Christopher Browne wrote:
>
>> [What I would really like to see, coded in some Lisp variant, is the
>> algorithm that Knuth presents in the Stanford Graphbase.  It's fairly
>> 32-bit-oriented, which is somewhat unfortunate...]
>
>It's a nice algorithm, but not spectacularly great. I forget what
>tests it fails; have a look at Marsaglia's papers. Knuth's generator
>is there called a "lagged Fibonacci" RNG, I think. But it is
>extremely fast, and produces good enough results for many
>applications.

I guess my agenda may not have been sufficiently obvious (I didn't
clearly imply it); one of the points to having the *identical* RNG
would be that this would permit constructing a "Lisp Graphbase,"
thereby providing a significant set of graph algorithms that could be
provably established to work the same as Knuth's examples.

-- 
"It goes against the grain of modern education to teach children to
program. What fun is there in making plans, acquiring discipline in
organizing thoughts, devoting attention to detail and learning to be
self-critical?" -- Alan Perlis
········@hex.net- <http://www.hex.net/~cbbrowne/lsf.html>
From: Gareth McCaughan
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <86aesd8vtu.fsf@g.local>
Christopher Browne wrote:

> I guess my agenda may not have been sufficiently obvious (I didn't
> clearly imply it); one of the points to having the *identical* RNG
> would be that this would permit constructing a "Lisp Graphbase,"
> thereby providing a significant set of graph algorithms that could be
> provably established to work the same as Knuth's examples.

Oh, I see. Here's a really bad implementation. The only testing
I've done is to call the test function and observe that it returns
the right thing.

(defpackage "GB_FLIP"
  (:use "CL")
  (:export gb-next-rand gb-init-rand gb-unif-rand test-flip))
(in-package "GB_FLIP")

(defparameter *A* (make-array 55 :element-type '(integer 0 2147483647)
                                 :initial-element 0)
  "The array containing random numbers for GB-NEXT-RAND to return.")

(defvar *n* 1
  "The array index of the next random number for GB-NEXT-RAND to return, plus 1.")

(defun gb-next-rand ()
  "Return a pseudo-random 31-bit number."
  (if (zerop *n*)
    (gb-flip-cycle)
    (aref *A* (decf *n*))))

(defun gb-flip-cycle ()
  "Make a new batch of 55 random numbers."
  (loop for ii upfrom 0
        for jj from 31 to 54 do
    (setf (aref *A* ii)
          (logand (- (aref *A* ii) (aref *A* jj))
                  #x7FFFFFFF)))
  (loop for ii from 24 to 54
        for jj upfrom 0      do
    (setf (aref *A* ii)
          (logand (- (aref *A* ii) (aref *A* jj))
                  #x7FFFFFFF)))
  (setf *n* 54)
  (aref *A* 54))    

(defun gb-init-rand (seed)
  "Set up for calls to GB-NEXT-RAND."
  (setf seed (logand seed #x7FFFFFFF))
  (let ((prev seed)
        (next 1))
    (setf (aref *A* 54) prev)
    (loop for i = 20 then (mod (+ i 21) 55)
          until (= i 54)
          do
      (setf (aref *A* i) next
            next (logand (- prev next) #x7FFFFFFF)
            seed (if (oddp seed)
                   (+ (ash seed -1) #x40000000)
                   (ash seed -1))
            next (logand (- next seed) #x7FFFFFFF)
            prev (aref *A* i))))
  (loop repeat 5 do (gb-flip-cycle)))

(defun gb-unif-rand (m)
  (let ((limit (- #x80000000 (mod #x80000000 m))))
    (loop for r = (gb-next-rand)
          until (< r limit)
          finally (return (mod r m)))))

(defun test-flip ()
  (gb-init-rand -314159)
  (assert (= (gb-next-rand) 119318998)
          ()
          "Failure on the first try!")
  (loop repeat 133 do (gb-next-rand))
  (assert (= (gb-unif-rand #x55555555) 748103812)
          ()
          "Failure on the second try!")
  'OK)

-- 
Gareth McCaughan  ················@pobox.com
sig under construction
From: Raymond Toy
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4n1zdqut6r.fsf@rtp.ericsson.se>
>>>>> "Christopher" == Christopher Browne <········@news.hex.net> writes:


    Christopher> The "optimal" approximation to the cosine function is the value that
    Christopher> is the closest to the actual value for the particular argument.

Closest in what sense?  You need to define that before saying the
approximation is optimal.

Granted, nowadays, I believe approximations for special functions are
almost always minimax approximations:  the maximum error is as small
as possible over a subset of the domain of the function.  Then
properties of the function are used to get the values over the rest of 
the domain.  And in that region, it's not clear that this
approximation is still "optimal".  

Ray
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <87u2qo4a65.fsf@orion.dent.isdn.cs.tu-berlin.de>
Fernando Mato Mira <········@iname.com> writes:

> Nothing of this is obvious from the above. But the main point is

??? Do you claim I have insights into Raymond's mind, that you do not
possess?  Or what do you claim?  I simply restated what was already
contained in Raymond's and my earlier posts.  No magic PSI-factor
involved.

> that a slower RNG might work in a case where a faster one doesn't,
> and the idea of having 1 canonical function called `RANDOM' is
> pretty dangerous in the hands of the noninitiated as evidenced by
> the issue that triggered this discussion.

I think I've made it absolutely clear in my original post, that comparing
RNGs is non-trivial.  The whole point of this was to show the absurdity of
simple benchmarks.  W.r.t. the non-initiated and RANDOM, see my other
post.  And w.r.t. "the main point": Speed and quality of RNGs aren't
that highly correlated, as can be witnessed by the fact that MT-19937 is a
very high speed RNG, yet still is one of the most successful RNGs when it
comes to theoretical and statistical tests[1].  So what is your main point
worth?  It doesn't really apply to Raymond's post, since his modification
was not to the RNG itself, and in any case is backed by theory.  And if
you want to claim that CLISP or ACL's RNGs are slower because they are
"better", I'd like to see evidence for this.  Have you actually looked up
the test results for the RNG algorithms in question?

Regs, Pierre.


Footnotes: 
[1]  And MT-19937 is not the only fast, high-quality RNG.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Raymond Toy
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4nzp0gvpb3.fsf@rtp.ericsson.se>
>>>>> "Pierre" == Pierre R Mai <····@acm.org> writes:

    Pierre> stands, you neither gain nor lose anything here.  But given
    Pierre> that the least significant bits of MT-19937 have been examined
    Pierre> rigorously, and have exhibited equally good results for
    Pierre> the usual tests as other bits, it seems theoretically

I looked briefly at the paper.  They give results for the groups of
bits starting from the most significant.  The results say that the
most significant bit is extremely random and all 32-bits are also very 
random.  However, they don't include any results for the least
significant bits, but do mention in passing that the least 6 bits are
2000-some-equidistributed, which means it's very random, I think.

Ray
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <871zds5rnq.fsf@orion.dent.isdn.cs.tu-berlin.de>
Raymond Toy <···@rtp.ericsson.se> writes:

> >>>>> "Pierre" == Pierre R Mai <····@acm.org> writes:
> 
>     Pierre> stands, you neither gain nor lose anything here.  But given
>     Pierre> that the least significant bits of MT-19937 have been examined
>     Pierre> rigorously, and have exhibited equally good results for
>     Pierre> the usual tests as other bits, it seems theoretically
> 
> I looked briefly at the paper.  They give results for the groups of
> bits starting from the most significant.  The results say that the
> most significant bit is extremely random and all 32-bits are also very 
> random.  However, they don't include any results for the least
> significant bits, but do mention in passing that the least 6 bits are
> 2000-some-equidistributed, which means it's very random, I think.

Although he only gives the exact numbers for the least 6 bits in his
paper, tests have also been done (as is usual) on shorter runs of
least-significant bits, without obvious problems.  I can't quite
remember in which paper I read a more detailed analysis.

Matsumoto himself recommends simply concatenating successive 32-bit
samples to get longer sequences.  Hellekalek's team also did tests on
MT-19937 (their usual pLab reports, but also other work IIRC), and
reported favourably on MT-19937's performance, though I'd have to dig
up their report to get the details on this.

Although MT-19937 is fairly new (~2 years), it is based on TT800,
which is a couple of years older, and which has been tested very
favourably before.

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <933114886.517012@fire-int>
Pierre R. Mai (····@acm.org) wrote:
: ········@cli.di.unipi.it (Pierpaolo Bernardi) writes:

: > And indeed I`m not concerned principally with bignum speed.  I am more
: > concerned about the apparent lack of care that some implementors put
: > in implementing such basic functions as RANDOM.  Please try my example
: > on ACL.

: Beware!  Benchmarking RANDOM is a non-trivial task, and timing
: (random (expt 10 500)) is just brain-damaged, unless you can convince
: me that this happens to be on the critical path of any
: non-brain-damaged program (which I would find hard to believe).  

I have not found this bug by trying functions at random with random
arguments.  It happened in a real program, and it was in the critical
path. Even if you find it hard to believe.


: If you want to benchmark RANDOM, first understand what you are
: benchmarking.  

I, at least, understand that if (random (expt 10 500)) signals a
condition instead of returning a number, something must be broken.

: Sadly, most implementations of any language deem
: it enough to provide a simplistic (and sometimes even seriously
: flawed) RNG, and not even documenting the exact algorithm and
: parameters used, thereby forcing any serious user to implement
: his own RNG anyway.  Comparing the performance of a flawed or
: severely restricted RNG to that of a high-quality one is not in
: any way meaningful (although there exist many high-quality RNGs
: out there which can be quite competitive to your usual crappy
: RNG).[1]

: Then use realistic examples.  Where would the range argument to the
: RNG be calculated afresh every time?  I can't think of any reasonable
: case.

A reasonable case occurs when doing primality tests of big integers.
But anyway this is irrelevant.
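
(For concreteness: a Miller-Rabin witness loop is the kind of code meant
here; it calls RANDOM with a fresh bignum limit derived from the
candidate on every draw.  This is my own sketch, and EXPT-MOD is a local
helper defined below, not a standard CL function.)

```lisp
(defun expt-mod (base power modulus)
  "Compute BASE^POWER mod MODULUS by binary exponentiation."
  (loop with result = 1
        while (plusp power)
        do (when (oddp power)
             (setf result (mod (* result base) modulus)))
           (setf base (mod (* base base) modulus)
                 power (ash power -1))
        finally (return result)))

(defun probably-prime-p (n &optional (k 20))
  "Miller-Rabin test: NIL means composite, T means prime with very
high probability.  Note the fresh range argument to RANDOM each round."
  (cond ((< n 2) nil)
        ((< n 4) t)                       ; 2 and 3 are prime
        ((evenp n) nil)
        (t
         ;; Write n-1 = 2^s * d with d odd.
         (let ((d (1- n)) (s 0))
           (loop while (evenp d)
                 do (setf d (ash d -1))
                    (incf s))
           (loop repeat k
                 always
                 (let* ((a (+ 2 (random (- n 3))))  ; witness in [2, n-2]
                        (x (expt-mod a d n)))
                   (or (= x 1)
                       (= x (1- n))
                       (loop repeat (1- s)
                             do (setf x (mod (* x x) n))
                             thereis (= x (1- n))))))))))
```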

: And finally you have to discern the different cases.  Whilst ACL's
: random implementation can be a bit slow in the general case,

It's not a question of speed.

P.
From: Tim Bradshaw
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <nkj4siqs839.fsf@tfeb.org>
········@cli.di.unipi.it (Pierpaolo Bernardi) writes:

> And indeed I`m not concerned principally with bignum speed.  I am more
> concerned about the apparent lack of care that some implementors put
> in implementing such basic functions as RANDOM.  Please try my example
> on ACL.
> 

Have you submitted a bug report to Franz, if it is buggy, or are you
just flaming on newsgroups in the hope that they'll somehow hear you?

--tim
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <933112748.73829@fire-int>
Tim Bradshaw (···@tfeb.org) wrote:
: ········@cli.di.unipi.it (Pierpaolo Bernardi) writes:

: > And indeed I`m not concerned principally with bignum speed.  I am more
: > concerned about the apparent lack of care that some implementors put
: > in implementing such basic functions as RANDOM.  Please try my example
: > on ACL.

: Have you submitted a bug report to Franz, if it is buggy, or are you
: just flaming on newsgroups in the hope that they'll somehow hear you?

*I* am flaming?   What about you?


Reporting bugs to Franz has not worked for me in the past.  

Surely I hope that they fix this.  If reporting bugs in ACL in this
newsgroup makes Franz fix them, I'll report here any new bug that I find.

P.
From: Duane Rettig
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4btcxk4qd.fsf@beta.franz.com>
I have been following this thread with interest; rest assured I consider
a failure of (random (expt 10 500)) to produce results to be a bug in
Allegro CL, and I have even been considering the Mersenne Twister as a
potential algorithm for a fast, highly random generator for Allegro CL.
However, I must answer this post, since it calls into question the
integrity of Franz's support.

········@cli.di.unipi.it (Pierpaolo Bernardi) writes:

> Tim Bradshaw (···@tfeb.org) wrote:
> : ········@cli.di.unipi.it (Pierpaolo Bernardi) writes:
> 
> : > And indeed I`m not concerned principally with bignum speed.  I am more
> : > concerned about the apparent lack of care that some implementors put
> : > in implementing such basic functions as RANDOM.  Please try my example
> : > on ACL.
> 
> : Have you submitted a bug report to Franz, if it is buggy, or are you
> : just flaming on newsgroups in the hope that they'll somehow hear you?
> 
> *I* am flaming?   What about you?
> 
> 
> Reporting bugs to Franz has not worked for me in the past.  

Though we encourage people who use our unsupported products to report
problems, bugs, and anomalies to us, we never give any promise of
support for these unsupported products.  We appreciated the three reports
that you sent in to us in early 1997, and at least one of them resulted in
a fix to the linux product.

> Surely I hope that they fix this.  If reporting bugs in ACL in this
> newsgroup makes Franz fix them, I'll report here any new bug that I find.

There are two things that you need in order to guarantee that bugs get fixed
in Allegro CL:

  1. An avenue to let us know that the bug exists.  We read a few newsgroups
and sometimes grab bugs or potential bugs from them, and try to work them
in to the fabric of the lisp.  However, we still prefer that you at least
send a report to ····@franz.com, so that we are sure not to miss anything.
Also, the distribution for such discussion may be a little too wide for
this newsgroup.  I am not afraid to admit that our product is not perfect
(yet :-), but I would not like to see those lispers who are not users of
Allegro CL to necessarily be bothered by complaints about our product.

  2. Priority.  We attach highest priority to ensuring the success of our
supported customers.  We are also obviously interested in continuing
to improve our product in general, though if this general goal takes time
and resources away from helping our customers to be successful, then it
gets lower priority.

Once a supported customer has reported a problem, and we have determined
that it is a bug,  we run through a standard process of determining what
the priority is of their bug.  This includes the obvious: we ask them how
important it is to fix it right away.  It also includes the possibilities
that either the customer will work around the problem, (sometimes we supply
the workaround) or that we will supply a patch.  Again, we usually ask our
customers what their preference is.  And finally, we log a "bug" report, a
database item that gets assigned priority for looking at or fixing in a
future release.  Non-bugs, but features that are highly desirable, are
logged as "rfe" reports (requests for enhancement).  We tend to look at
all of these reports often (we get a list of the top priority bugs and rfes
daily) and try to work them into our schedules according to their priorities.


Finally, regarding your earlier response:

> : > I am more
> : > concerned about the apparent lack of care that some implementors put
> : > in implementing such basic functions as RANDOM.

Make no mistake about it; we care.

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <933200340.641678@fire-int>
Duane Rettig (·····@franz.com) wrote:
: ········@cli.di.unipi.it (Pierpaolo Bernardi) writes:
: > Tim Bradshaw (···@tfeb.org) wrote:
: > : ········@cli.di.unipi.it (Pierpaolo Bernardi) writes:

: > Reporting bugs to Franz has not worked for me in the past.  

:  Though we encourage people who use our unsupported products to report
: problems, bugs, and anomalies to us, we never give any promise of
: support for these unsupported products. 

Of course.

: > Surely I hope that they fix this.  If reporting bugs in ACL in this
: > newsgroup makes Franz fix them, I'll report here any new bug that I find.

: There are two things that you need in order to guarantee that bugs get fixed
: in Allegro CL:

:   1. An avenue to let us know that the bug exists.  We read a few newsgroups
: and sometimes grab bugs or potential bugs from them, and try to work them
: in to the fabric of the lisp.  However, we still prefer that you at least
: send a report to ····@franz.com, so that we are sure not to miss anything.
: Also, the distribution for such discussion may be a little too wide for
: this newsgroup.

Three bug reports have not even produced an acknowledgement of having
received the mails.  What you say in this usenet article is the first
sign I have that you have actually received these mails.  Do you find
it reasonable for people to keep sending bug reports to what appears
to be a black hole?

: I am not afraid to admit that our product is not perfect
: (yet :-), but I would not like to see those lispers who are not users of
: Allegro CL to necessarily be bothered by complaints about our product.

I think that lispers, whether or not users of Allegro, are very
interested in discussing flaws, defects and strong points of the
available lisp implementations. 

Best regards,
Pierpaolo Bernardi
From: Duane Rettig
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4wvvkp8dm.fsf@beta.franz.com>
········@cli.di.unipi.it (Pierpaolo Bernardi) writes:

> Duane Rettig (·····@franz.com) wrote:
> : ········@cli.di.unipi.it (Pierpaolo Bernardi) writes:
> : > Surely I hope that they fix this.  If reporting bugs in ACL in this
> : > newsgroup makes Franz fix them, I'll report here any new bug that I find.
> 
> : There are two things that you need in order to guarantee that bugs get fixed
> : in Allegro CL:
> 
> :   1. An avenue to let us know that the bug exists.  We read a few newsgroups
> : and sometimes grab bugs or potential bugs from them, and try to work them
> : in to the fabric of the lisp.  However, we still prefer that you at least
> : send a report to ····@franz.com, so that we are sure not to miss anything.
> : Also, the distribution for such discussion may be a little too wide for
> : this newsgroup.
> 
> Three bug reports have not even produced an acknowledgement of having
> received the mails.  What you say in this usenet article is the first
> sign I have that you have actually received these mails.  Do you find
> it reasonable for people to keep sending bug reports to what appears
> to be a black hole?

No, not at all.  This has troubled me since I first looked into it
yesterday, and although I won't make any policy statements here,
I can say that we are actively discussing solutions to removing
this "black hole" perception.

> : I am not afraid to admit that our product is not perfect
> : (yet :-), but I would not like to see those lispers who are not users of
> : Allegro CL to necessarily be bothered by complaints about our product.
> 
> I think that lispers, whether or not users of Allegro, are very
> interested in discussing flaws, defects and strong points of the
> available lisp implementations. 

Well, I'll leave it to users on the net to respond to this question;
as a vendor, I honestly don't know.  However, the problem you are
describing is only present for non-supported users - we are relatively
new to the concept of supporting non-support :-)  On the other hand,
a recent survey of our _supported_ customers showed a high satisfaction
rate for our technical support for that group.  For such matters, the
direct, personal touch works much better than a usenet discussion.

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Christopher B. Browne
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <slrn7pvqsq.a7f.cbbrowne@knuth.brownes.org>
On 28 Jul 1999 18:11:49 -0700, Duane Rettig <·····@franz.com> posted: 
>········@cli.di.unipi.it (Pierpaolo Bernardi) writes:
>> Three bug reports have not even produced an acknowledgement of having
>> received the mails.  What you say in this usenet article is the first
>> sign I have that you have actually received these mails.  Do you find
>> it reasonable for people to keep sending bug reports to what appears
>> to be a black hole?
>
>No, not at all.  This has troubled me since I first looked into it
>yesterday, and although I won't make any policy statements here,
>I can say that we are actively discussing solutions to removing
>this "black hole" perception.

A Truly Confident organization can expose its bug list in the
confidence that since they're actually working to resolve the
problems, the revelation of the Few Remaining Blemishes will not work
to their detriment.

Nay, it works in their favor, since customers and would-be customers
appreciate honesty, and will consider that the fact that the Dastardly
Other Guys don't do the same means they must have a Whole Lot Of Bugs
To Hide.

How seriously you *should* take the above is in the eye of the
beholder.  I've put it in a less-than-serious fashion so that only
those completely lacking the ability to read irony can take it as a
definitive doctrine.

There are, nonetheless, examples of it:

<http://master.debian.org/~wakkerma/bugs/> indicates the number of
"critical" bugs outstanding in the "development fork" of the Debian
Linux distribution.  It actually presents a graph of bug counts over
time to show growth/decline thereof.

<http://egcs.cygnus.com/testresults/> provides an analogous report of
tests and test results for GCC.

It probably is *not* true, in general, in the computer industry, that
customers spend money on product as a result of being able to
"appreciate honesty" as suggested above; it probably *should* be
true...

-- 
Rules of the Evil Overlord #27. "I will hire a talented fashion
designer to create original uniforms for my legions of terror, as
opposed to some cheap knock-offs that make them look like Nazi
stormtroopers, Roman footsoldiers, or savage Mongol hordes. All were
eventually defeated and I want my troops to have a more positive
mind-set." 
········@ntlug.org- <http://www.ntlug.org/~cbbrowne/lsf.html>
From: Erik Naggum
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <3142227161327836@naggum.no>
* Christopher B. Browne
| A Truly Confident organization can expose its bug list in the confidence
| that since they're actually working to resolve the problems, the
| revelation of the Few Remaining Blemishes will not work to their
| detriment.

  Franz Inc's documentation for ACL 4.3 contained a known bugs list.
  unfortunately, keeping a bug list up to date is a lot of work, and lots
  of bugs had actually been fixed, so you might say the bug list had bugs.
  it has been removed for ACL 5.0.

| Nay, it works in their favor, since customers and would-be customers
| appreciate honesty, and will consider that the fact that the Dastardly
| Other Guys don't do the same means they must have a Whole Lot Of Bugs To
| Hide.

  well, whether you're a bad or good guy, your view of other people tends
  to be that they have to present some evidence to you that they aren't
  like you.  this sometimes causes some very interesting interactions
  between good guys and bad.

| It probably is *not* true, in general, in the computer industry, that
| customers spend money on product as a result of being able to "appreciate
| honesty" as suggested above; it probably *should* be true...

  well, honesty has a serious risk that the computer industry has larger
  problems dealing with than more stable industries.  suppose you say, "no,
  we don't support Windows 2000", and this once-honest statement turns
  false, how much would it cost to reach all the people who believed you
  the first time?  suppose you say, "yes, we are working on support for
  Windows 2000", and this is a lie, how would anyone know unless someone
  decided that total honesty was a good thing?  when Bill Gates lied with a
  straight face that he had a BASIC for the Altair, which both he and the
  designer of Altair knew was a lie, he won enough time to actually get
  there in time.  since people tend to punish dishonesty only when it is
  discovered and reward honesty only when it benefits the recipient of
  honest news more than the sender, being too honest is self-destructive.

#:Erik
-- 
  suppose we blasted all politicians into space.
  would the SETI project find even one of them?
From: Mark Carroll
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <3Oi*h1a6n@news.chiark.greenend.org.uk>
In article <················@naggum.no>, Erik Naggum  <····@naggum.no> wrote:
>* Christopher B. Browne
>| A Truly Confident organization can expose its bug list in the confidence
>| that since they're actually working to resolve the problems, the
>| revelation of the Few Remaining Blemishes will not work to their
>| detriment.
>
>  Franz Inc's documentation for ACL 4.3 contained a known bugs list.
>  unfortunately, keeping a bug list up to date is a lot of work, and lots
>  of bugs had actually been fixed, so you might say the bug list had bugs.
>  it has been removed for ACL 5.0.
(snip)

I would have thought that continuing accurate maintenance of an
organised known bugs list would be pretty much a prerequisite for
anyone managing 'current' software, especially if it's not just used
in-house. The hope would be that the work involved for Franz in
publishing such a list for ACL 5.x would be mostly in concatenating
and presenting it, rather than generating it in the first place!

-- Mark
From: Erik Naggum
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <3142237920350364@naggum.no>
* Mark Carroll <·····@chiark.greenend.org.uk>
| I would have thought that continuing accurate maintenance of an organised
| known bugs list would be pretty much a prerequisite for anyone managing
| 'current' software, especially if it's not just used in-house.

  maintaining a database for internal use is quite different from making it
  useful to the outside world.

| The hope would be that the work involved for Franz in publishing such a
| list for ACL 5.x would be mostly in concatenating and presenting it,
| rather than generating it in the first place!

  it seems that you assume they don't have a clue.  I find such assumptions
  to be indicative of the attitudes of the sender rather than whoever he is
  trying to put it over on.

  I know from maintaining other projects that organizing a list of bugs for
  internal use requires only a modicum of orderliness.  making it possible
  for outsiders to make use of this list is sometimes very, very hard.

  e.g., I recently had a problem with nested WITH-TIMEOUT forms, because
  the macro didn't generate a new tag to throw to when the timeout fired,
  so it always triggered the inner timeout-code.  this had a lot of really
  weird ramifications that obscured the actual problem, but until we found
  the bug, it was very hard even to write down coherently what was going
  on.  after the problem was found, a fix was made available to me within a
  couple minutes.  so the known bug lived for a few seconds, while the
  problem had persisted for weeks.  what should be entered in the known
  bugs list?  is it more useful to provide a patch to the problem than to
  let people work around the bug?  in my view "known bugs" lists are good
  for developers to avoid pitfalls, but when the usual case is that you fix
  the bug as soon as you find what it actually is, what's the point?  just
  get people to download patches.  (Allegro CL 5.0.1 comes with a function
  that automatically retrieves updates.)
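
  the tag-capture bug described above can be sketched in a few lines of
  Common Lisp.  this is an illustrative reconstruction, not Franz's
  actual WITH-TIMEOUT code; the timer machinery is elided and the macro
  names here are invented:

```lisp
;; Illustrative reconstruction only -- not Franz's actual code.
;; Broken version: every expansion shares one literal catch tag, so
;; when an outer timer throws, the innermost CATCH intercepts it and
;; the inner timeout forms run.
(defmacro with-timeout-broken ((seconds &rest timeout-forms) &body body)
  (declare (ignore seconds timeout-forms))  ; timer machinery elided
  `(catch 'timeout                          ; same tag in every expansion!
     ,@body))

;; Fixed version: GENSYM gives each expansion its own uninterned tag,
;; so each timer's THROW unwinds to its own matching CATCH.
(defmacro with-timeout-fixed ((seconds &rest timeout-forms) &body body)
  (declare (ignore seconds timeout-forms))  ; timer machinery elided
  (let ((tag (gensym "TIMEOUT-")))
    `(catch ',tag
       ,@body)))
```

  with the broken macro, two nested timeouts throw to the same tag, so
  the inner handler always wins, which produces exactly the confusing
  symptoms described.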

#:Erik
-- 
  suppose we blasted all politicians into space.
  would the SETI project find even one of them?
From: Mark Carroll
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <zWw*q7b6n@news.chiark.greenend.org.uk>
In article <················@naggum.no>, Erik Naggum  <····@naggum.no> wrote:
>* Mark Carroll <·····@chiark.greenend.org.uk>
>| I would have thought that continuing accurate maintenance of an organised
>| known bugs list would be pretty much a prerequisite for anyone managing
>| 'current' software, especially if it's not just used in-house.
>
>  maintaining a database for internal use is quite different from making it
>  useful to the outside world.

Semi-useful would do; better than none at all.

>| The hope would be that the work involved for Franz in publishing such a
>| list for ACL 5.x would be mostly in concatenating and presenting it,
>| rather than generating it in the first place!
>
>  it seems that you assume they don't have a clue.  I find such assumptions
>  to be indicative of the attitudes of the sender rather than whoever he is
>  trying to put it over on.

I wasn't assuming anything of the sort.

>  I know from maintaining other projects that organizing a list of bugs for
>  internal use requires only a modicum of orderliness.  making it possible
>  for outsiders to make use of this list is sometimes very, very hard.

That's interesting - I've rarely been in a situation where the bugs I
had to document naturally ended up in a form that my in-house
colleagues could understand but few people outside could use. Hell, I
find Debian Linux's bug tracking system WWW archive extremely useful,
despite not being at all involved in Debian's development myself.

Just an archive of people's bug reports, where they don't contain
sensitive information, would be nice, so you can see if anyone else is
having the same problem as you, completely omitting
Franz' internally-generated reports.

>  e.g., I recently had a problem with nested WITH-TIMEOUT forms, because
>  the macro didn't generate a new tag to throw to when the timeout fired,
>  so it always triggered the inner timeout-code.  this had a lot of really
>  weird ramifications that obscured the actual problem, but until we found
>  the bug, it was very hard even to write down coherently what was going
>  on.  after the problem was found, a fix was made available to me within a
>  couple minutes.  so the known bug lived for a few seconds, while the
>  problem had persisted for weeks.  what should be entered in the known
>  bugs list?  is it more useful to provide a patch to the problem than to
>  let people work around the bug?  in my view "known bugs" lists are good
>  for developers to avoid pitfalls, but when the usual case is that you fix
>  the bug as soon as you find what it actually is, what's the point?  just
>  get people to download patches.  (Allegro CL 5.0.1 comes with a function
>  that automatically retrieves updates.)

If it's possible to give people an idea of "if you're using this or
this and get these error messages, it's not actually your fault -
patches are coming soon", or whatever, then that's often actually
pretty useful information IME - it can be a pain debugging Surprising
Behaviour; and once the bug is known to _you_, you may want to check
other code that may have been buggy without you noticing, to make sure
it hasn't been screwing something up. Also, it's not always
immediately easy to patch things usefully - e.g. if the guy who has
write access to the installation is visiting Chad, or you're using a
basic application-environment lisp image shipped from someone else.

-- Mark
From: Christopher B. Browne
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <slrn7q0k4v.c39.cbbrowne@knuth.brownes.org>
On 29 Jul 1999 12:03:59 +0100 (BST), Mark Carroll
<·····@chiark.greenend.org.uk> posted: 
>In article <················@naggum.no>, Erik Naggum  <····@naggum.no> wrote:
>>* Christopher B. Browne
>>| A Truly Confident organization can expose its bug list in the confidence
>>| that since they're actually working to resolve the problems, the
>>| revelation of the Few Remaining Blemishes will not work to their
>>| detriment.
>>
>>  Franz Inc's documentation for ACL 4.3 contained a known bugs list.
>>  unfortunately, keeping a bug list up to date is a lot of work, and lots
>>  of bugs had actually been fixed, so you might say the bug list had bugs.
>>  it has been removed for ACL 5.0.
>(snip)
>
>I would have thought that continuing accurate maintenance of an
>organised known bugs list would be pretty much a prerequisite for
>anyone managing 'current' software, especially if it's not just used
>in-house. The hope would be that the work involved for Franz in
>publishing such a list for ACL 5.x would be mostly in concatenating
>and presenting it, rather than generating it in the first place!

They might wrinkle their noses at the use of Perl code, but the use of
something like Bugzilla would allow this process to be automated.  Or
PIKT, or GNATS, or ...

The "politically correct" approach would be to deploy a CL-based
equivalent; surely the increased productivity of CL would make it
easier to produce this than the Perl-based Bugzilla, right?
-- 
"Let's start preparing for the future.  Now's a good time, since it's
already here."  -- David L. Andre
········@hex.net- <http://www.hex.net/~cbbrowne/lsf.html>
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <87vhb3377v.fsf@orion.dent.isdn.cs.tu-berlin.de>
········@news.brownes.org (Christopher B. Browne) writes:

> They might wrinkle their noses at the use of Perl code, but the use of
> something like Bugzilla would allow this process to be automated.  Or
> PIKT, or GNATS, or ...
> 
> The "politically correct" approach would be to deploy a CL-based
> equivalent; surely the increased productivity of CL would make it
> easier to produce this than the Perl-based Bugzilla, right?

The problem with maintaining a public bug database (as with any kind
of database) is not a problem of software.  It is a problem of data
accuracy and consistency.  Have you taken a look at the Debian Bug
database?  It is full of duplicate, inaccurate, incomprehensible and
out-dated bug reports.  Although some efforts have been made to
increase the quality of the contents of that database, and although
each developer spends some time dealing with the database, the
database is still more of a tool for developers than for the general
public.

And this isn't to knock the Debian effort.  It just illustrates the
non-trivial cost and effort involved in keeping such a database useful
and accurate, especially to the non-initiated public.  Where developers
can often make sense of the information as it is, because of in-depth
knowledge of the software in question, and its history, non-developers
will often not have the required knowledge to do this.  So keeping a
database useful to the public requires a non-trivial investment.  It
usually seems more useful to keep the database private, and give
outsiders "mediated" access to it.

And this doesn't have anything to do with keeping your dirty laundry
private.  The same arguments often apply within companies, where your
internal clients are often better off with mediated access than with a
simple dump of your internal database to the intra-net.

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Stig Hemmer
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <ekviu73ljfn.fsf@verden.pvv.ntnu.no>
········@cli.di.unipi.it (Pierpaolo Bernardi) writes:
> Three bug reports have not even produced an acknowledgement of having
> received the mails.  What you say in this usenet article is the first
> sign I have that you have actually received these mails.  Do you find
> it reasonable for people to keep sending bug reports to what appears
> to be a black hole?

Another data point:

I occasionally use the Linux Trial Edition of ACL 5.0.

I stumbled across a minor non-conformance.  I reported it and received
a quick and courteous reply.

Very good.

Stig Hemmer,
Jack of a Few Trades.
From: Raymond Toy
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4ng127vak4.fsf@rtp.ericsson.se>
>>>>> "Stig" == Stig Hemmer <····@pvv.ntnu.no> writes:

    Stig> ········@cli.di.unipi.it (Pierpaolo Bernardi) writes:
    >> Three bug reports have not even produced an acknowledgement of having
    >> received the mails.  What you say in this usenet article is the first
    >> sign I have that you have actually received these mails.  Do you find
    >> it reasonable for people to keep sending bug reports to what appears
    >> to be a black hole?

    Stig> Another data point:

    Stig> I occasionally use the Linux Trial Edition of ACL 5.0.

    Stig> I stumbled across a minor non-conformance.  I reported it and received
    Stig> a quick and courteous reply.

One more point:

A while ago I ran a simple floating-point benchmark, using random, to
compare CMUCL and ACL 4.3 for Linux.  I posted the results to the CMUCL
mailing list and that got back to Franz somehow.  (The list is not
closed.)
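
For concreteness, a benchmark of that general shape might look like the
following.  This is a hypothetical sketch, not the code actually posted
to the list:

```lisp
;; Hypothetical sketch of a RANDOM micro-benchmark -- not the code
;; actually posted to the CMUCL list.  Summing single-float RANDOM
;; calls in a tight loop is enough to expose large performance
;; differences between implementations.
(defun random-benchmark (&optional (n 1000000))
  (let ((sum 0.0))
    (declare (type single-float sum))
    (dotimes (i n sum)           ; return the sum so the loop
      (incf sum (random 1.0))))) ; cannot be optimized away

;; (time (random-benchmark))  ; compare timings across lisps
```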

I exchanged quite a few e-mails with Franz (Duane, if I remember) on
how to improve ACL performance.  This is where I learned about the
limitations of random on ACL.

All of this for a random non-paying user.  If I'm ever in the position 
to buy a lisp system, I'll certainly give Franz a very, very serious
look.

Ray

P.S.  I always felt vendors ought to have someone reading and posting
(non-commercial) to the newsgroups.  A great way to advertise, without
really advertising!
From: Erik Naggum
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <3142136699904733@naggum.no>
* ········@cli.di.unipi.it (Pierpaolo Bernardi)
| Reporting bugs to Franz has not worked for me in the past.  

  how do you define "work" for reporting a bug?

| Surely I hope that they fix this.  If reporting bugs in ACL in this
| newsgroup makes Franz fix them, I'll report here any new bug that I find.

  I wish you wouldn't.  by reporting a bug to Franz Inc, you will learn
  whether it has already been reported and what the status is, you report a
  bug in the interest of having it fixed in your product, i.e., you have a
  reason the bug impacts you that is not mere frustration, and you let
  Franz Inc take part in your problems.  all of this is constructive.   by
  reporting it here, you would likely report bugs that have been fixed or
  have a known workaround, it is unlikely that the bug makes a business
  difference to you since USENET is not used to divulge business sensitive
  information, meaning that the bug would be a "disassociated bug" that it
  doesn't make any sense to provide workarounds for, you would most likely
  report the bug in a similarly hostile way to what you have done so far,
  which can only hurt Franz Inc for no reason at all, and finally, you get
  to decide what is a bug or not, and Franz Inc would have a hard time
  defending the expenditure of time and effort countering your misguided
  views of what constitutes bugs.  all of this is destructive.  you concede
  as much in the way you formulate the above.

  your ardent defense of the CLISP implementation no matter what the
  criticism and your destructiveness towards other players in the Common
  Lisp market indicate that you are not driven by principle or by a desire
  to see good Common Lisp implementations, but by something else that
  ignores problems in one implementation and exaggerates problems in
  another.  a "something else" that fits is "not being of a particularly
  rational mind".  you may wish to alter your behavior so at least to give
  a different impression.

#:Erik
-- 
  suppose we blasted all politicians into space.
  would the SETI project find even one of them?
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <933203141.500237@fire-int>
Erik Naggum (····@naggum.no) wrote:
: * ········@cli.di.unipi.it (Pierpaolo Bernardi)
: | Reporting bugs to Franz has not worked for me in the past.  

:   how do you define "work" for reporting a bug?

At the very least, it should have some detectable effect.  Like, say, a
confirmation of having received the report.

: | Surely I hope that they fix this.  If reporting bugs in ACL in this
: | newsgroup makes Franz fix them, I'll report here any new bug that I find.

:   I wish you wouldn't.  

I could not care less what you wish.

:   by reporting a bug to Franz Inc, you will learn
:   whether it has already been reported and what the status is, 

Done it.  Have not learned anything of what you describe.

:   you report a
:   bug in the interest of having it fixed in your product, i.e., you have a
:   reason the bug impacts you that is not mere frustration, and you let
:   Franz Inc take part in your problems.  all of this is constructive.  

Since I'm not a customer of Franz Inc, I didn't report the bugs
expecting that they would fix them for me.  Naively, I thought that they
could be interested in bug reports about their product, whether or not
the reporting person is a paying customer.

:   by
:   reporting it here, you would likely report bugs that have been fixed or
:   have a known workaround, it is unlikely that the bug makes a business
:   difference to you since USENET is not used to divulge business sensitive
:   information, 

If I were using ACL for business, I would have bought a licence.
Your writing does not make any sense (no news here).

[... usual naggum drivels, elided]

P.
From: Erik Naggum
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <3142223884916796@naggum.no>
* Pierpaolo Bernardi
| At the very least, it should have some detectable effect.  Like, say, a
| confirmation of having received the report.

  Franz Inc has a pretty extensive system that mails out confirmations as
  soon as someone accepts an SPR.  I don't always get such confirmations,
  but I have yet to see any evidence of malice behind that.  of course, if
  you don't _need_ evidence of malice, but assume it by default, which you
  do if I read you right, you would need evidence of its absence, and three
  instances of malice-by-default is pretty good evidence that Franz Inc is
  maliciously ignoring you.  others should know what constitutes evidence
  in your mind, however, before they base anything on your conclusions.

| I could not care less what you wish.

  of course you couldn't.  you post your aggressive complaints here and
  show that you're a destructive fool instead of actually trying to fix
  problems.  such attitudes naturally lead to ignoring other people's
  desires in others ways, too.  I'm sure we can look forward to even more
  destructive whining from you about products you don't use and don't want
  others to use because CLISP should have replaced all products, and when
  CLISP has a leg up on them with some particular feature, other products
  can be vilified at will with no consequences for you.  right?

  it was also a figure of speech to an irrational moron who perhaps would
  react a bit more rationally with a bit kinder words, but you're not
  reacting to what people say, but to your vilified image of them, so it
  was of course lost on you.  I should have known, but I keep holding out
  for some remaining shreds of decency in people, no matter what they have
  done.  you obviously don't, and in some cases you obviously know better.

| Your writing does not make any sense (no news here).
| 
| [... usual naggum drivels, elided]

  geez.  but I'm actually happy you had to attack me personally.  having
  delineated the constructive vs destructive way to behave in the face of
  bugs, it has been very informative to learn that you fall squarely in the
  destructive camp, and it isn't just an action-reaction thing, but actually
  your personality.  thank you.

  I keep telling XEmacs people who act like rabid morons that it doesn't
  help the image of XEmacs people.  I don't like the CLISP community,
  either, because it contains a number of people who have an agenda to
  prove that CLISP is not inferior to any other product in any way.  I wish
  they could just be happy with what they have made and talk about its
  value to them instead of having to define the value in terms of their
  view of lack of value of something else.  of course, you couldn't care
  less about what I wish, but now that I have expressed such a wish, I
  expect you to act in the exact opposite way.
  
#:Erik
-- 
  suppose we blasted all politicians into space.
  would the SETI project find even one of them?
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <933332436.398442@fire-int>
Erik Naggum (····@naggum.no) wrote:
: * Pierpaolo Bernardi
: | At the very least, it should have some detectable effect.  Like, say, a
: | confirmation of having received the report.

:   Franz Inc has a pretty extensive system that mails out confirmations as
:   soon as someone accepts an SPR.  I don't always get such confirmations,
:   but I have yet to see any evidence of malice behind that.  of course, if
:   you don't _need_ evidence of malice, but assume it by default,

Malice?  Erik, please stop attributing to people intentions which are
only products of your fantasy.

: | I could not care less what you wish.

:   of course you couldn't.  you post your aggressive complaints here and
:   show that you're a destructive fool instead of actually trying to fix
:   problems.  such attitudes naturally lead to ignoring other people's
:   desires in others ways, too.  I'm sure we can look forward to even more
:   destructive whining from you about products you don't use and don't want
:   others to use because CLISP should have replaced all products, and when
:   CLISP has a leg up on them with some particular feature, other products
:   can be vilified at will with no consequences for you.  right?

:   it was also a figure of speech to an irrational moron who perhaps would
:   react a bit more rationally with a bit kinder words, but you're not
:   reacting to what people say, but to your vilified image of them, so it
:   was of course lost on you.  I should have known, but I keep holding out
:   for some remaining shreds of decency in people, no matter what they have
:   done.  you obviously don't, and in some case you obviously know better.

: | Your writing does not make any sense (no news here).
: | 
: | [... usual naggum drivels, elided]

:   geez.  but I'm actually happy you had to attack me personally. 

*I* am personally attacking you?  After all the name-calling that you
directed at me in this thread?

I could find this funny, if I were the kind of person who makes fun
of other people's serious problems.

Regards
P.
From: Erik Naggum
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <3143227329979631@naggum.no>
* Erik Naggum
| I don't always get such confirmations, but I have yet to see any evidence
| of malice behind that.  of course, if you don't _need_ evidence of
| malice, but assume it by default, [and the rest of the sentence, which
| you elided for reasons only you can fully understand:] which you do if I
| read you right, you would need evidence of its absence, and three
| instances of malice-by-default is pretty good evidence that Franz Inc is
| maliciously ignoring you.

* Pierpaolo Bernardi
| Malice?  Erik, please stop attributing to people intentions which are
| only products of your fantasy.

  for one who gives advice, you sure are lousy at taking it, but if you
  think the best way to deal with stuff you don't agree with is to do the
  same to other people, I have all the evidence I need of any malice at
  work, and it is neither at Franz Inc nor my fantasy.

  if you don't understand that conditionals like the one you reacted so
  blindly to are intended to make you think about your position, and
  instead start to defend it as if the condition were true, what am I
  supposed to think but that it is indeed true?  if you want to argue
  against what you call "products of your fantasy", at the very least: do
  yourself the favor of not denying them so vociferously.  if they are
  indeed products of my fantasy, even only for the purpose of getting to
  know a little better what kind of person I'm dealing with, it behooves
  you to _do_ better, and it's a lot simpler than defending yourself, too.

  in this particular case, I wanted you to think about the implication that
  you don't need evidence of malice because you act as if the lack of a
  response to a bug report was somehow a serious flaw that you could and
  should blame somebody else for.  common reasons not to get replies are
  that you send from a broken mail system, use some stupid spam-avoiding
  fake mailbox, or some other fault on the sender's side.  whether someone
  habitually assumes the fault lies with the other party is sometimes an
  important thing to learn.  I guess I know that that's precisely what you
  tend towards, now, as in the following:

| *I* am personally attacking you?  After all the name-calling that you
| directed at me in this thread?

  yes, Pierpaolo, you were actually attacking me.  that's a fact.  if you
  think it is excusable, or defensible, or irrelevant compared to other
  things, that's an entirely different issue, even if you don't understand
  such simple ethical issues.  the fact remains, and I'm just enjoying the
  spectacle of someone who claims to be so ardently opposed to personal
  attacks even try to deny the fact of his own behavior, even defending his
  denial as the fault of him whom he attacks.  I find such behavior
  fascinating to watch, but at the very least, I don't conflate clean and
  simple facts with my wishes, whatever else you may fancy accusing me of.

| I could find this funny, if I were the kind of person that takes fun
| at other persons' serious problems.

  you already started with a curious case of projection in the article I'm
  responding to, then denial of facts, and now you're trying very hard to
  get an upper hand by blaming me for everything.  funny, indeed, but then
  again, I _do_ think it's hilariously funny to expose people who try _so_
  hard to appear above reproach.

  next time, Pierpaolo, just grow a clue and consider that you do yourself
  a favor by doing something that cannot be attacked in ways you don't
  like, such as not attributing malice to people you don't know at all, and
  above all: don't let so much of _your_ personal response patterns show
  through -- there's no fun in people who are _actually_ mad.

#:Erik
-- 
  suppose we blasted all politicians into space.
  would the SETI project find even one of them?
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <933328741.433744@fire-int>
Erik Naggum (····@naggum.no) wrote:
: * ········@cli.di.unipi.it (Pierpaolo Bernardi)

:   your ardent defense of the CLISP implementation no matter what the
:   criticism and your destructiveness towards other players in the Common
:   Lisp market indicate [...]

Could you please point out to me where I have defended ardently Clisp?
I would be very surprised if you could find any such references.

P.
From: Gareth McCaughan
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <863dygjq8d.fsf@g.local>
Pierpaolo Bernardi wrote:

> Gareth McCaughan (················@pobox.com) wrote:
>: Johan Kullstam wrote:
> 
>:> CMUCL is good, but it's a bit industrial strength.  the compiler is
>:> wordy and complains a lot about type inferences and such.  this is, to
>:> be sure, useful since CMUCL can produce fast number-crunching code,
>:> but may be a bit overwhelming to the neophyte.  CMUCL exists for a few
>:> popular flavors of unix and afaik does not do windows.
> 
>: On the other hand, CMUCL does have a native-code compiler, and
>: CLISP doesn't, so if performance matters CMUCL may win.
> 
> In my understanding, `industrial strength' means correct and supported.

Is this meant to be a reply to what I wrote, or to what Johan wrote?
(I wasn't the one who said that CMU CL was "a bit industrial strength".)

>: (For some
>: purposes CLISP will likely be *faster* than CMUCL, though; I'm
>: told its bignums are especially good.)
> 
> try comparing the speed of (random (expt 10 500)) on Clisp, CMUCL
> and Allegro.

I presume this would simply confirm what I already said.

>: The version of CLISP I have (which is admittedly a bit old) also
>: doesn't grok inline functions. 
> 
> Allegro doesn't grok them either. 
> 
> In Clisp, functions are inlined by the file compiler, but not by
> COMPILE (as far as I can remember, Clisp has always worked in this
> way).

That's interesting; I hadn't realised it was so. Thanks for the
information.

>: This, plus the fact that "built-in"
>: operations tend to be much faster than build-it-yourself ones,
>: does slightly discourage one from constructing abstractions;
>: that's a shame.
> 
> I don't understand this.  You are complaining that built-in functions
> are too fast?

No, I'm complaining that user code is too slow.

> Should be easy to fix.  Just insert a delay in the interpreter loop
> whenever a built-in function is called.  You may even make this delay
> so big as to make build-it-yourself functions more convenient, thus
> encouraging constructing abstractions.

A brilliant idea. I'll do it at once.

-- 
Gareth McCaughan  ················@pobox.com
sig under construction
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <932748524.839709@fire-int>
Gareth McCaughan (················@pobox.com) wrote:
: Pierpaolo Bernardi wrote:
: > Gareth McCaughan (················@pobox.com) wrote:
: >: Johan Kullstam wrote:

: > In my understanding, `industrial strength' means correct and supported.

: Is this meant to be a reply to what I wrote, or to what Johan wrote?
: (I wasn't the one who said that CMU CL was "a bit industrial strength".)

Sorry.  I mixed two replies that should have been kept separate.

: > try comparing the speed of (random (expt 10 500)) on Clisp, CMUCL
: > and Allegro.

: I presume this would simply confirm what I already said.

Yes.  But maybe more than you thought.

: > I don't understand this.  You are complaining that built-in functions
: > are too fast?

: No, I'm complaining that user code is too slow.

I know of no bytecoded lisp faster than clisp.  

P.
From: Damond Walker
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <LJ1m3.65$5K4.13474@typhoon1.gnilink.net>
Pierpaolo Bernardi wrote in message <················@fire-int>...

>: > try comparing the speed of (random (expt 10 500)) on Clisp, CMUCL
>: > and Allegro.


    Quick question...  Is (random (expt 10 500)) supposed to be a slow
operation?  Runs almost instantly with clisp on my little machine (dual
pentium-133 -- does clisp support SMP systems?).  I ran this straight at the
prompt.  Is there some kind of generally accepted benchmark for clisp,
cmucl, etc.?

        Damond
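[A single call really is near-instant; to see implementation differences you need to repeat it.  A minimal timing sketch in portable Common Lisp follows -- the loop count is an arbitrary choice of mine, and TIME's report format varies by implementation:

```lisp
;; Time the bignum RANDOM call discussed above.  One call is too fast
;; to measure at the prompt, so repeat it in a loop.
(defparameter *limit* (expt 10 500))        ; a 501-digit bignum bound

(time (random *limit*))                     ; near-instant on any implementation
(time (dotimes (i 10000) (random *limit*))) ; repeated, for measurable times
```
]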
From: Lars Marius Garshol
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <wkogh35kd3.fsf@ifi.uio.no>
* Damond Walker
| 
| Quick question...  Is (random (expt 10 500)) supposed to be a slow
| operation?  

I don't think so. It runs immediately on CMUCL, CLISP and Allegro,
although Allegro chokes with some floating-point-related complaint.

--Lars M.
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <87pv1ji3kv.fsf@orion.dent.isdn.cs.tu-berlin.de>
········@cli.di.unipi.it (Pierpaolo Bernardi) writes:

> : No, I'm complaining that user code is too slow.
> 
> I know of no bytecoded lisp faster than clisp.  

This is not a contradiction to his complaint.

While bytecodes buy you a couple of things (like smaller image size,
smaller compiler-complexity and often greater portability), you take
a performance hit against native code.  Those are the trade-offs.
This doesn't make CLISP a bad CL implementation.  No need to defend it
so avidly.  Neither is CLISP always better than CMU CL, nor is CMU CL
always better than CLISP.  There are certain things that CMU CL does
better, and certain things that CLISP does better, and yet other
things that none of them are really good at.  So you have to live
with the trade-offs and choose an implementation that satisfies your
requirements the most.  That's life.

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Christopher R. Barry
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <877lnrm8us.fsf@2xtreme.net>
········@cli.di.unipi.it (Pierpaolo Bernardi) writes:

> I know of no bytecoded lisp faster than clisp.  

I know of no Common Lisp slower than CLISP.

The reason why people are complaining that user functions in CLISP are
a lot slower than predefined functions is because nearly all of the
CLISP predefined functions are written in C, while in a "real" Lisp
the Lisp is written in Lisp and the compiler is smart enough to
efficiently compile its own Lisp and user Lisp.

As for the bignum performance of CLISP; yes, it's amazing. Try taking
the factorial of 100000 sometime. CLISP will return the 500k result in
under 13 minutes on a 200MHz MMX Pentium and the heap will never grow
above a few MB. With CMU or Allegro you'll need gigs of swap to do
this and it would be way slower. If you look at Haible's page you'll
see he's written a fair amount of C/C++ numerical code and authored
many mathematical papers, so he has particular skill in this area. (The
CLISP bignum routines are all C.)

CLISP's floats are about 10-40x slower than CMUCL's, which in many
cases are 3-4x faster than the next best commercial offering. But who
cares? I'd trade it for Allegro's CLOS performance anyday.

Christopher
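[Christopher's benchmark is easy to reproduce.  The sketch below is mine, not his actual code; it uses an iterative definition so recursion depth is not an issue at n = 100000:

```lisp
;; Iterative factorial, suitable for very large n.
(defun fact (n)
  (let ((acc 1))
    (loop for i from 2 to n do (setf acc (* acc i)))
    acc))

;; Return NIL from the form so the REPL doesn't print the huge bignum;
;; printing such a result can cost as much as computing it.
(time (progn (fact 100000) nil))
```
]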
From: Christopher R. Barry
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <87vhbap7yj.fsf@2xtreme.net>
Tim Bradshaw <···@tfeb.org> writes:

 > * Christopher R Barry wrote:
 >> As for the bignum performance of CLISP; yes, it's amazing. Try taking
 >> the factorial of 100000 sometime. CLISP will return the 500k result in
 >> under 13 minutes on a 200MHz MMX Pentium and the heap will never grow
 >> above a few MB. With CMU or Allegro you'll need gigs of swap to do
 >> this and it would be way slower. 

 > I don't have figures for Allegro, but CMUCL 18a on a 333MHz sparc took
 > 13 minutes and some seconds. Given 64Mb between GCs it never got above
 > 67,843,456 bytes dynamic space in use, and never above 691,920 bytes
 > retained after GC.  So it would probably have run with a dynamic space
 > of 2Mb or so but it would have GCd a lot more often and probably
 > runtime would have been GC dominated.

 > This is a bit slower and a bit bigger but definitely neither gigs of        
 > swap nor way slower.

My box here is a 64MB 200MHz MMX Pentium. Last I tried it with CMU CL I
had 128MB swap and it exhausted all my swap and all the windows in X
started dying and it took me several minutes to get an xterm up so I
could kill -9 the damn thing. I've got 512MB swap now and I'll try it
after I sleep and post the result. [It's 5am in California, one of those 
Friday nights....]

Christopher
From: Tim Bradshaw
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <ey3so6ecfyg.fsf@lostwithiel.tfeb.org>
* Christopher R Barry wrote:

> My box here is a 64MB 200MHz MMX Pentium. Last I tried it with CMU CL I
> had 128MB swap and it exhausted all my swap and all the windows in X
> started dying and it took me several minutes to get an xterm up so I
> could kill -9 the damn thing. I've got 512MB swap now and I'll try it
> after I sleep and post the result. [It's 5am in California, one of those 
> Friday nights....]

Does CMUCL have some fancy GC on x86 now?  That sounds like exactly
the sort of syndrome you might get from a generational collector where
intermediate results are being tenured bogusly.

--tim
From: Christopher R. Barry
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <87aeslhlx3.fsf@2xtreme.net>
Tim Bradshaw <···@tfeb.org> writes:

>  * Christopher R Barry wrote:
>> My box here is a 64MB 200MHz MMX Pentium. Last I tried it with CMU CL I
>> had 128MB swap and it exhausted all my swap and all the windows in X
>> started dying and it took me several minutes to get an xterm up so I
>> could kill -9 the damn thing. I've got 512MB swap now and I'll try it
>> after I sleep and post the result. [It's 5am in California, one of those 
>> Friday nights....]

> Does CMUCL have some fancy GC on x86 now?  That sounds like exactly
> the sort of syndrome you might get from a generational collector where
> intermediate results are being tenured bogusly.

Yes, x86 has gengc. I tried it again and it ran for about 15 minutes
without swapping out and then it filled all 512MB of my VM and I had to
kill it.

Christopher
From: Raymond Toy
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <4nu2qrfs5e.fsf@rtp.ericsson.se>
>>>>> "Christopher" == Christopher R Barry <······@2xtreme.net> writes:

    Christopher> Yes, x86 has gengc. I tried it again and it ran for about 15 minutes
    Christopher> without swapping out and then it filled all 512MB of my VM and I had to
    Christopher> kill it.

I think, but I'm not sure, that the problem is in printing out the
number, not in computing it.

There is at least one known test case where the gencgc on x86 leaks
memory.  The sparc port with its simpler GC doesn't leak memory in
this case.

Ray
From: Simon Leinen
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <aau2qrysw9.fsf@limmat.switch.ch>
[on using 10000! as a Lisp bignum benchmark]

>>>>> "rt" == Raymond Toy <···@rtp.ericsson.se> writes:
> I think, but I'm not sure, that the problem is in printing out the
> number, not in computing it.

Right, printing the result is generally more expensive than computing
it.  Allegro 5.0/Linux on a 266 MHz Pentium:

NOC(8): (declaim (optimize (speed 3) (space 1) (debug 0)))
T
NOC(9): (defun fac (x) (labels ((fac1 (x y) (declare (type unsigned-byte x y)) (if (= x 0) y (fac1 (1- x) (* x y))))) (fac1 x 1)))
FAC
NOC(10): (compile 'fac)
FAC
NIL
NIL
NOC(11): (progn (time (setq result (fac 10000))) (values))
; cpu time (non-gc) 10,250 msec user, 40 msec system
; cpu time (gc)     1,510 msec user, 40 msec system
; cpu time (total)  11,760 msec user, 80 msec system
; real time  12,683 msec
; space allocation:
;  1 cons cell, 0 symbols, 78,713,384 other bytes, 0 static bytes
NOC(12): (progn (time (print result)) (values))

[35660 digits omitted]
; cpu time (non-gc) 14,090 msec user, 40 msec system
; cpu time (gc)     0 msec user, 0 msec system
; cpu time (total)  14,090 msec user, 40 msec system
; real time  14,630 msec
; space allocation:
;  9 cons cells, 0 symbols, 15,008 other bytes, 1646208 static bytes
NOC(13): 

Regards,
-- 
Simon.
From: Tim Bradshaw
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <ey3lnc4cmws.fsf@lostwithiel.tfeb.org>
* Christopher R Barry wrote:

> Yes, x86 has gengc. I tried it again and it ran for about 15 minutes
> without swapping out and then it filled all 512MB of my VM and I had to
> kill it.

OK, well this looks like the difference between clisp and cmucl has
nothing really to do with bignum performance but more to do with this
particular problem being a screw case for CMUCL's more sophisticated
GC.  I suspect if you can turn off the generational stuff or tweak the
parameters so it isn't tenuring a large number of intermediate
results, then it will perform within a small factor of clisp, as it
does on sparc.

--tim
From: R. Toy
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <379B9B34.C2D50203@mindspring.com>
Christopher R. Barry wrote:
> 
> CLISP's floats are about 10-40x slower than CMUCL's, which in many
> cases are 3-4x faster than the next best commercial offering. But who
> cares? 

I do.  So do several of the key developers of CMUCL.  Plus, it's the 
only Lisp I know that doesn't GC to death when working with complex 
numbers.  This is a major win for me.

Ray
From: Bruno Haible
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <7nil93$dia$1@news.u-bordeaux.fr>
Christopher R. Barry <······@2xtreme.net> wrote:
>
>> I know of no bytecoded lisp faster than clisp.  
>
> I know of no Common Lisp slower than CLISP.

Then try Poplog (http://www.elwood.com/alu/table/systems.htm#poplog).
Last time I tried it, it ran about the same speed as CLISP.

Aside from that, CLISP is more portable than other implementations. You
will see what that's worth when you buy a new machine with an IA-64 CPU.
How long, do you think, will it take for CMUCL's, Allegro CL's, or
LispWorks's compiler to be modified to generate code for that CPU?
For CLISP, you'll have to modify a few #defines in the include file - or
it could even be completely autoconfiguring by then - and you compile it.

> The reason why people are complaining that user functions in CLISP are
> a lot slower than predefined functions is because nearly all of the
> CLISP predefined functions are written in C

Yeah, I understand that. You optimize 700 functions by hand for them, and
then they complain about it. It's a pity.

> while in a "real" Lisp the Lisp is written in Lisp and the compiler is
> smart enough to efficiently compile its own Lisp and user Lisp.

... and a "real" Lisp carries its own operating system, and its own windowing
system. And is expensive. And doesn't run on stock hardware...

Can you please put aside these prejudices about "real" Lisps which you
borrowed from past decades?

CLISP is different.

* It runs fine in an xterm, and is therefore accessible to non-Emacs users.

* Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
  times as high as a shell's startup time. You can therefore use it as a
  script interpreter (with structures and CLOS), or as a CGI interpreter.

* It supports Unicode, not just as an add-on, but right from the start: The
  `character' type is Unicode (16 bit). CLISP is therefore the instrument of
  choice for manipulating HTML or XML text.

* Its CLX implementation uses libX11 and is therefore up-to-date with all
  recent X11 developments.

Yes, CLISP is different.

                         Bruno                       http://clisp.cons.org/
From: Christopher Browne
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <pc7n3.56915$AU3.1409523@news2.giganews.com>
On 26 Jul 1999 21:51:31 GMT, Bruno Haible <······@clisp.cons.org> wrote:
>Christopher R. Barry <······@2xtreme.net> wrote:
>>
>>> I know of no bytecoded lisp faster than clisp.  
>>
>> I know of no Common Lisp slower than CLISP.
>
>Then try Poplog (http://www.elwood.com/alu/table/systems.htm#poplog).
>Last time I tried it, it ran about the same speed as CLISP.
>
>Aside from that, CLISP is more portable than other implementations. You
>will see what that's worth when you buy a new machine with an IA-64 CPU.
>How long, do you think, will it take for CMUCL's, Allegro CL's, or
>LispWorks's compiler to be modified to generate code for that CPU?
>For CLISP, you'll have to modify a few #defines in the include file - or
>it could even be completely autoconfiguring by then - and you compile it.

It seems to me that CLISP subscribes somewhat to Richard Gabriel's
"Worse is Better" edict.

<http://www.ai.mit.edu/docs/articles/good-news/good-news.html>

CLISP may not be as good as the "other guys" in terms of performance,
but if it can be readily ported to new architectures as a side-effect
of people porting C compilers over, then CLISP may "win the race."

One thing I'm not clear on: Erik Naggum "blasts" CLISP pretty heavily
for nonconformance with X3J13.  I'm not sure to what extent this
represents:
a) His biases,
b) His perception of *past* noncompliant states of CLISP,
c) Continuing noncompliance of CLISP with X3J13.

Free software has some tendency not to be *really* compliant with
standards.  After all, adding functionality is a whole lot more fun
than:
  a) Writing up regression tests,
  b) Fixing bugs relating to nonstandard behaviour,
  c) Rerunning regression tests to make sure new
     functionality/optimization didn't break anything.

>> The reason why people are complaining that user functions in CLISP are
>> a lot slower than predefined functions is because nearly all of the
>> CLISP predefined functions are written in C
>
>Yeah, I understand that. You optimize 700 functions by hand for them, and
>then they complain about it. It's a pity.

It's something of a pity that the optimization took place in C.

It would be *really* slick if CLISP were largely "self-hosting," with
the main body of code written in Lisp, producing C code that was
optimized by a Lisp-based optimizer.

That would permit extending the implementation with "real fast"
operators where needed.

>> while in a "real" Lisp the Lisp is written in Lisp and the compiler is
>> smart enough to efficiently compile its own Lisp and user Lisp.
>
> ... and a "real" Lisp carries its own operating system, and its own
> windowing system. And is expensive. And doesn't run on stock
> hardware... 
>
>Can you please put aside these prejudices about "real" Lisps which you
>borrowed from past decades?

Symbolics and Lisp Machines went out of business as a result of some
of the side-effects of those "prejudices," as well as because "Worse
Is Better."

>CLISP is different.
>
>* It runs fine in an xterm, and is therefore accessible to non-Emacs users.
>
>* Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
>  times as high as a shell's startup time. You can therefore use it as a
>  script interpreter (with structures and CLOS), or as a CGI interpreter.
>
>* It supports Unicode, not just as an add-on, but right from the start: The
>  `character' type is Unicode (16 bit). CLISP is therefore the instrument of
>  choice for manipulating HTML or XML text.
>
>* Its CLX implementation uses libX11 and is therefore up-to-date with all
>  recent X11 developments.
>
>Yes, CLISP is different.

If CLISP can take advantage of those aspects of "Worse is Better" that
it can, without damaging performance *too* much, it can do well.

I'm not sure how easy/hard it is to extend it with further functions;
if a tuning process shows that there are a couple of functions that
should be turned into inline C so as to immensely improve performance,
and there is some support for automagically generating that C, then it
can be kept Fast Enough.

-- 
"Bawden is misinformed.  Common Lisp has no philosophy.  We are held
together only by a shared disgust for all the alternatives."
-- Scott Fahlman, explaining why Common Lisp is the way it is....
········@hex.net- <http://www.ntlug.org/~cbbrowne/langlisp.html>
From: ······@clisp.cons.org
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <7nk5k6$sqj$1@news.u-bordeaux.fr>
Christopher Browne <········@hex.net> wrote:
>
> It seems to me that CLISP subscribes somewhat to Richard Gabriel's
> "Worse is Better" edict.

Not that much. My interpretation of "Worse is Better" is that it applies to
the quality of the API presented to the programmer. This is definitely not
the way it is done in CLISP.

However, "Less is Better" is true for CLISP, to some extent. CLISP does not
implement features which would be hard to maintain and are not essential in
some way. Simply because we acknowledge that development resources for CLISP
are scarce. Examples of this attitude are:
  - No built-in editor, because every user has his/her own preferred editor
    anyway.
  - No support for `update-instance-for-redefined-class' because this would
    cause performance penalties in the rest of CLOS, and it's not used anyway.
  - No `defsystem', since there is no standardized spec for it, and since
    `make' is fine for compiling Lisp programs.

> It's something of a pity that the optimization took place in C.
>
> It would be *really* slick if CLISP were largely "self-hosting," with
> the main body of code written in Lisp, producing C code that was
> optimized by a Lisp-based optimizer.
> ...
> I'm not sure how easy/hard it is to extend it with further functions;
> if a tuning process shows that there are a couple of functions that
> should be turned into inline C so as to immensely improve performance,
> and there is some support for automagically generating that C, then it
> can be kept Fast Enough.

What you describe here is the architecture of GCL. Do you use GCL? Do you
help the GCL maintainers, extend the GCL compiler, and so on? If not,
why not?

                    Bruno Haible                         http://clisp.cons.org/
From: Rainer Joswig
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <joswig-2707991346500001@194.163.195.67>
In article <············@news.u-bordeaux.fr>, ······@clisp.cons.org wrote:

> are scarce. Examples of this attitude are:

>   - No support for `update-instance-for-redefined-class' because this would
>     cause performance penalties in the rest of CLOS, and it's not used anyway.

Sigh.
From: Joerg-Cyril Hoehle
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <qkphfmqcgqo.fsf@tzd.dont.telekom.spam.de.me>
Hi Rainer,

······@lavielle.com (Rainer Joswig) writes:
[Bruno Haible of CLISP fame wrote:]
> >   - No support for `update-instance-for-redefined-class' because this would
> >     cause performance penalties in the rest of CLOS, and it's not used anyway.

> Sigh.

My reply may be a little off topic.

See how Schemers "discussed" (fought) the cost of R5RS'
DYNAMIC-UNWIND.  There is a real cost to some operations.  Every
implementation makes design decisions.  Some implementations may
decide to drop a costly feature.  Others go and design another
language (Dylan) with possibilities for sealing, etc.

If you were a Smalltalker, would you say 'Sigh' to an implementation
that wouldn't provide 'become' whose use is strictly discouraged in
every style guide?  The design choices underlying the implementation
may make this operation intractable or at least too expensive.

Did you know that state of the art commercial Smalltalk
implementations don't recompile children methods when a class is
redefined, so that the images may crash or cause weird behaviour
because these old methods reference slots at wrong (obsolete) offsets?

Sigh? Really?

Regards,
	Jorg Hohle
Telekom Research Center -- SW-Reliability
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <871zdterr2.fsf@orion.dent.isdn.cs.tu-berlin.de>
············@tzd.dont.telekom.spam.de.me (Joerg-Cyril Hoehle) writes:

> Hi Rainer,
> 
> ······@lavielle.com (Rainer Joswig) writes:
> [Bruno Haible of CLISP fame wrote:]
> > >   - No support for `update-instance-for-redefined-class' because this would
> > >     cause performance penalties in the rest of CLOS, and it's not used anyway.
> 
> > Sigh.
> 
> My reply may be a little of topic.
> 
> See how Schemers "discussed" (fought) the cost of R5RS'
> DYNAMIC-UNWIND.  There is a real cost to some operations.  Every
> implementation makes design decisions.  Some implementations may
> decide to drop a costly feature.  Others go and design another
> language (Dylan) with possibilities for sealing, etc.

[ Smalltalk stuff snipped ]

> Sigh? Really?

I don't know what Rainer's sigh was trying to convey. I on the other
hand would not sigh at the fact that CLISP doesn't provide
update-instance-for-redefined-class.  That is an implementation
decision (which brings CLISP out of line with the standard, but
probably might still qualify it as a subset), and if the user
community of CLISP prefers better CLOS performance (relative to CLISP
with u-i-f-r-c) to the presence of this update protocol, who am I to
quarrel with that, since I'm not an active user of CLISP, and my
priorities are different from those of the existing CLISP community it 
seems.

I would however sigh at the second part of Bruno's sentence, if I
should take "... and it's not used anyway" as an assertion on his
part, that u-i-f-r-c isn't used in the CL community as a whole.  That
would make assumptions about the usage patterns of a whole number of
people, many with different interests and priorities than Bruno's.

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Vassili Bykov
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <uso69dfqk.fsf@objectpeople.com>
············@tzd.dont.telekom.spam.de.me (Joerg-Cyril Hoehle) writes:
> 
> If you were a Smalltalker, would you say 'Sigh' to an implementation
> that wouldn't provide 'become' whose use is strictly discouraged in
> every style guide?  The design choices underlying the implementation
> may make this operation intractable or at least too expensive.

Your point being...?  You can design around the absence of #become: so
it is not strictly necessary--it is there as an artifact of the times
when it was dirt cheap to implement in the only existing
implementation--yet so far every implementation has been providing it,
though with somewhat varying semantics.

> Did you know that state of the art commercial Smalltalk
> implementations don't recompile children methods when a class is
> redefined, so that the images may crash or cause weird behaviour
> because these old methods reference slots at wrong (obsolete) offsets?

What implementations are you talking about?  This is plain wrong for
*every* commercial one in existence.  Those worth their salt both
recompile the methods and mutate existing instances to match the new
instance structure--though not as gracefully as
UPDATE-INSTANCE-FOR-REDEFINED-CLASS would allow.

--Vassili
From: Joerg-Cyril Hoehle
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <qkpg127ly73.fsf@tzd.dont.telekom.spam.de.me>
Vassili Bykov <·······@objectpeople.com> writes:
> > Did you know that state of the art commercial Smalltalk
> > implementations don't recompile children methods when a class is
> > redefined, so that the images may crash or cause weird behaviour
> > because these old methods reference slots at wrong (obsolete) offsets?
> 
> What implementations are you talking about?  This is plain wrong for
> *every* commercial one in existence.  Those worth their salt both
> recompile the methods and mutate existing instances to match the new
> instance structure--though not as gracefully as
> UPDATE-INSTANCE-FOR-REDEFINED-CLASS would allow.

What you say in the last sentence is correct, and I think I wasn't
clear enough.  I meant the class tree *below* a given class that is
being changed.  The children classes weren't recompiled in Visual
Works 2.0.

The problem was about inserting instance variables in the middle of
their list or removing instance variables (thus shuffling the slots
around) and then running into problems with children classes.

Regards,
	Jorg Hohle
Telekom Research Center -- SW-Reliability
From: Vassili Bykov
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <ud7xa2m7c.fsf@objectpeople.com>
············@tzd.dont.telekom.spam.de.me (Joerg-Cyril Hoehle) writes:
> The children classes weren't recompiled in Visual
> Works 2.0.
> 
> The problem was about inserting instance variables in the middle of
> their list or removing instance variables (thus shuffling the slots
> around) and then running into problems with children classes.

The only reason for this I can think of would be some in-house
tweaks you had in your image.  VW 1.0 definitely handled this
correctly--I did some work in it that required a study of ClassBuilder
(the class responsible for rebuilding a class after a redefinition).
The method responsible for recompiling the methods and mutating
instances called itself recursively on all the subclasses of the class
being rebuilt.  Later, I lived through all the later VW versions up to
3.0, with no problems of the kind.  Squeak, also coming from the PARC
ST-80 code base, properly recompiles subclasses as well.  I guess this
must have been implemented correctly back in the PARC days.  It is
just a recursive call anyway, nothing hi-tech.  [Sorry for the off-topic]

Cheers,

--Vassili
From: Thomas A. Russ
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <ymiemhsd496.fsf@sevak.isi.edu>
······@clisp.cons.org writes:

>   - No support for `update-instance-for-redefined-class' because this would
>     cause performance penalties in the rest of CLOS, and it's not used anyway.

I would take issue with this.  It is one reason that the Loom software
will not run on CLISP.  We use UPDATE-INSTANCE-FOR-REDEFINED-CLASS in
our system.

-- 
Thomas A. Russ,  USC/Information Sciences Institute          ···@isi.edu    
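[For readers who don't know the protocol being discussed: UPDATE-INSTANCE-FOR-REDEFINED-CLASS lets existing instances be migrated when their class is redefined.  A minimal sketch -- the POINT class and derived RHO slot are my own illustration, not Loom code:

```lisp
;; A class that will later gain a derived slot.
(defclass point ()
  ((x :initarg :x :accessor x)
   (y :initarg :y :accessor y)))

(defparameter *p* (make-instance 'point :x 3 :y 4))

;; When POINT is redefined, existing instances are updated (lazily, at
;; latest on the next slot access), and this method fills the new slot.
(defmethod update-instance-for-redefined-class :after
    ((p point) added discarded plist &rest initargs)
  (declare (ignore discarded plist initargs))
  (when (member 'rho added)
    (setf (slot-value p 'rho)
          (sqrt (+ (* (x p) (x p)) (* (y p) (y p)))))))

;; Redefine POINT with the extra slot; *P* is migrated on demand.
(defclass point ()
  ((x :initarg :x :accessor x)
   (y :initarg :y :accessor y)
   (rho :accessor rho)))

(rho *p*)   ; => 5.0
```
]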
From: Robert Monfera
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <379F7DE0.4C1D1578@fisec.com>
"Thomas A. Russ" wrote:
> 
> ······@clisp.cons.org writes:
> 
> >   - No support for `update-instance-for-redefined-class' because this would
> >     cause performance penalties in the rest of CLOS, and it's not used anyway.
> 
> I would take issue with this.  It is one reason that the Loom software
> will not run on CLISP.  We use UPDATE-INSTANCE-FOR-REDEFINED-CLASS in
> our system.

Although I respect CLISP and Corman Lisp a lot, CLOS seems to suffer
most from the lack of compliance in these implementations (probably
because Corman CLOS+MOP is based on Closette, I don't know about
CLISP).  

In what way would performance penalties occur if the dynamic features of
CLOS were implemented (maybe as an -ansi option)?  Is it not a
comparison of performance penalty of exploiting the functionality versus
not having the functionality at all?

Maybe there is a Closette-based CLOS compatibility package out there?

Robert
From: Erik Naggum
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <3142055861691238@naggum.no>
* ······@clisp.cons.org (Bruno Haible)
| How long, do you think, will it take for CMUCL's, Allegro CL's, or
| LispWorks's compiler to be modified to generate code for that CPU?

  what an odd way to put it.  of course real compilers aren't "modified to
  generate code" for new processors.  ports are generally prepared some
  time before a new processor becomes available, if there is demand for it,
  and finalized when the vendor can get their hands on a machine.  that is
  usually some time before the general market can purchase the computers,
  since vendors tend to believe that their markets will increase if there
  are good development tools available for them when they hit the streets.
  this all leads to the obvious conclusion that if there is evidence of
  demand for Allegro CL, say, for IA-64, there will be an Allegro CL
  available for IA-64 before any random user can compile CLISP for it,
  provided he purchases a sufficiently good C compiler first.  or do you
  think GCC will be available for IA-64 as the first compiler that does
  really good code?  last time I looked, the processor manufacturers again
  prefer to do their own compilers, since much interesting work has taken
  place in compiler technology that GCC/EGCS just hasn't caught up with,
  and most of these new processors are so hard to optimize that the work
  necessary to port GCC exceeds the work necessary to roll their own, not
  to mention the usefulness of producing machine code directly instead of
  going through the assembler.

  I'd say your argument backfired on you.

| CLISP is different.
| 
| * It runs fine in an xterm, and is therefore accessible to non-Emacs users.

  huh?  which other Common Lisp doesn't?

| * Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
|   times as high as a shell's startup time. You can therefore use it as a
|   script interpreter (with structures and CLOS), or as a CGI interpreter.

  the startup time for Allegro CL on my system is 0.06 seconds.  the
  startup time for bash on my system is 0.02 seconds.  wow, you beat my
  factor 3 with a factor 2.5.  I'm _so_ impressed.

| * It supports Unicode, not just as an add-on, but right from the start: The
|   `character' type is Unicode (16 bit). CLISP is therefore the instrument of
|   choice for manipulating HTML or XML text.

  that's odd.  Allegro CL also has 16-bit characters if you ask for it, and
  it has had that for a good number of years.  yes, it's doing Unicode
  under Windows.  I'm currently working on Unicode support for the Unix
  international edition.

| * Its CLX implementation uses libX11 and it therefore up-to-date with all
|   recent X11 developments.

  I don't use CLX, but this sounds like a good thing for those who do.

| Yes, CLISP is different.

  I'd say it's a little less different than you think.  if you want to
  attack others with stuff like "Can you please put aside these prejudices
  about "real" Lisps which you borrowed from past decades?" you might want
  to update your view of the "real" Lisps out there.  you're not fighting
  against Lisp machines, anymore.

#:Erik
-- 
  suppose we blasted all politicians into space.
  would the SETI project find even one of them?
From: Bruno Haible
Subject: Re: compiler technology (was: Re: CMU CL vs. CLISP?)
Date: 
Message-ID: <7nkbbg$ulq$1@news.u-bordeaux.fr>
To my question:
>| How long, do you think, will it take for CMUCL's, Allegro CL's, or
>| LispWorks's compiler to be modified to generate code for that CPU?

Erik Naggum <····@naggum.no> replied:
>  what an odd way to put it.  of course real compilers aren't "modified to
>  generate code" for new processors.  ports are generally prepared some
>  time before a new processor becomes available, if there is demand for it,
>  and finalized when the vendor can get their hands on a machine.  that is
>  usually some time before the general market can purchase the computers,
>  since vendors tend to believe that their markets will increase if there
>  are good development tools available for them when they hit the streets.
>  this all leads to the obvious conclusion that if there is evidence of
>  demand for Allegro CL, say, for IA-64, there will be an Allegro CL
>  available for IA-64 before any random user can compile CLISP for it,

Hardware vendors do believe in the importance of development tools. C compiler
writers and Linux porters sometimes get a brand-new machine for free. Other
categories of application vendors, probably Lisp compiler writers as well,
typically have to rent such pre-release hardware. Maybe the Allegro CL
demand for IA-64 will be sufficient for its vendor to pay for such hardware.
But I was also talking about CMUCL - how should they get at such a machine?

>  before any random user can compile CLISP for it,
>  provided he purchases a sufficiently good C compiler first.

Any developer needs to buy the C compiler, be it bundled with the OS or
sold separately.

My point is: IA-64 needs heavy changes to the code generator, because the
instructions are 41 bits each and grouped into 128-bit bundles of three. The
testing alone of such a modified compiler will take longer than the entire
porting of CLISP.

>  or do you think GCC will be available for IA-64 as the first compiler
>  that does really good code?

This question is really off-topic because CLISP can be compiled with any
ANSI C or C++ compiler, therefore the availability of gcc does not matter.

But anyway, I don't mind discussing this. I don't think GCC will be the
first compiler for IA-64, but it won't come very late either.

>  last time I looked, the processor manufacturers again
>  prefer to do their own compilers

At least IBM is an active contributor to GCC. And Intel has contributed a
lot to the gcc derivative called `pgcc', but that hasn't been completely
integrated into GCC (AFAIK).

>  since much interesting work has taken place in compiler technology
>  that GCC/EGCS just hasn't caught up with,

I don't know about which technology you are talking. Recently, Be Inc.
has dropped MetroWerks as C compiler for the i586 version of BeOS and
replaced it with GCC.

>  I'd say your argument backfired on you.

I don't think so. How long did it take for CMUCL to be ported to i386?
Paul Werkowski made heroic efforts for one long year.


                Bruno                        http://clisp.cons.org/
From: Erik Naggum
Subject: Re: compiler technology (was: Re: CMU CL vs. CLISP?)
Date: 
Message-ID: <3142072824912106@naggum.no>
* Bruno Haible
| But I was also talking about CMUCL - how should they get at such a machine?

  your point is certainly valid for CMUCL, but your extrapolation to other
  native Common Lisp implementations leaves something to be desired in its
  applicability.

#:Erik
-- 
  suppose we blasted all politicians into space.
  would the SETI project find even one of them?
From: Tim Bradshaw
Subject: Re: compiler technology (was: Re: CMU CL vs. CLISP?)
Date: 
Message-ID: <nkjyag2kvhl.fsf@tfeb.org>
······@clisp.cons.org (Bruno Haible) writes:


> My point is: IA-64 needs heavy changes to the code generator, because the
> instructions are 41 bits each and grouped into 128-bit bundles of three. The
> testing alone of such a modified compiler will take longer than the entire
> porting of CLISP.

This is not a loaded question, I actually want to know this!

When clisp is rebuilt for ia64 will it be a 64-bit system, in the
sense of having bigger address space and I guess bigger fixnums &c, or
will it still be 32-bit?  Is there a 64-bit clisp now (on sparc or
other currently-64-bit machines, for which compilation technology &
hardware already exists)?

Thanks

--tim
From: Bruno Haible
Subject: Re: compiler technology (was: Re: CMU CL vs. CLISP?)
Date: 
Message-ID: <7nkqvm$3dt$1@news.u-bordeaux.fr>
Tim Bradshaw <···@tfeb.org> asked:
>
> When clisp is rebuilt for ia64 will it be a 64-bit system, in the
> sense of having bigger address space and I guess bigger fixnums &c, or
> will it still be 32-bit?

You will be able to compile clisp (like other software) as either a 32-bit
application or a 64-bit application. The code space needed for either will
probably be the same, but the data (the memory images) will likely be 60% larger
in 64-bit mode.

Details at http://developer.intel.com/design/IA64/devinfo.htm

> Is there a 64-bit clisp now (on sparc or other currently-64-bit machines,
> for which compilation technology & hardware already exists)?

clisp has run in 64-bit mode on DEC/Compaq Alpha since 1993, and on Mips since 1997.
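
(A quick way to see what word size a given build actually gives you:)

```lisp
;; Probe the fixnum width of the running implementation: a 64-bit
;; build should report well over 32 bits here.
(format t "fixnum bits: ~D~%range: ~D .. ~D~%"
        (integer-length most-positive-fixnum)
        most-negative-fixnum
        most-positive-fixnum)
```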

                   Bruno                        http://clisp.cons.org/
From: Tim Bradshaw
Subject: Re: compiler technology (was: Re: CMU CL vs. CLISP?)
Date: 
Message-ID: <ey3zp0gkbbb.fsf@lostwithiel.tfeb.org>
* Bruno Haible wrote:

> You will be able to compile clisp (like other software) as either a 32-bit
> application or a 64-bit application. The code space needed for either will
> probably the same, but the data (the memory images) will likely be 60% larger
> in 64-bit mode.

Sorry, this wasn't the question I meant to ask (bad wording on my
part).

What I meant was, when clisp is built on a 64-bit machine (with
suitable C-level support &c) will things like fixnums and so forth be
wider than they are on 32-bit machines?

Thanks

--tim
From: Fernando Mato Mira
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <379D7801.268E6160@iname.com>
Bruno Haible wrote:

> * Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
>   times as high as a shell's startup time. You can therefore use it as a
>   script interpreter (with structures and CLOS), or as a CGI interpreter.

Now _THIS_ is news. It means one can forget about Scheme for scripts.
I didn't know that.
[Hm. But what about fast _compiled_ scripts with fast startup?]

Thanks, Bruno.
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <876736w34m.fsf@orion.dent.isdn.cs.tu-berlin.de>
Fernando Mato Mira <········@iname.com> writes:

> > * Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
> >   times as high as a shell's startup time. You can therefore use it as a
> >   script interpreter (with structures and CLOS), or as a CGI interpreter.
> 
> Now _THIS_ is news. It means one can forget about Scheme for scripts
> I didn't know that.
> [Hm. But what about fast _compiled_ scripts with fast startup?]

As Erik has already pointed out, most implementations have a fast start
up time nowadays, if they are in the OS caches (like bash usually is).
Even CMU CL starts up and exits in around 0.5s on my machine nowadays,
and CMU CL is quite slow on start-up.  And if you use a resident image,
you can easily do scripting via its socket interfaces, in even less
start-up time.  ACL starts up and exits in 0.075s, CLISP in 0.060s
(both without suppressing banner output).

It seems to me that scripting in CL isn't a question of start-up
times, but more of a nice scripting library.
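
For what it's worth, the scripting side needs nothing beyond a #! line
(the interpreter path below is a guess; adjust it for your installation):

```lisp
#!/usr/local/bin/clisp
;; Minimal CLISP script: CLISP treats the #! line as a comment, so the
;; file can be chmod +x'ed and run directly, CGI-style.
(format t "Hello from ~A ~A~%"
        (lisp-implementation-type)
        (lisp-implementation-version))
```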

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Rob Warnock
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <7nk5r9$6i2af@fido.engr.sgi.com>
Fernando Mato Mira  <········@iname.com> wrote:
+---------------
| Bruno Haible wrote:
| > * Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
| >   times as high as a shell's startup time. You can therefore use it as a
| >   script interpreter (with structures and CLOS), or as a CGI interpreter.
| 
| Now _THIS_ is news. It means one can forget about Scheme for scripts
+---------------

Why do you say *that*?  SIOD Scheme is similarly fast-starting (roughly
2.5 times Bourne Shell to do "hello world")...


-Rob

-----
Rob Warnock, 8L-855		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		FAX: 650-933-0511
Mountain View, CA  94043	PP-ASEL-IA
From: Fernando Mato Mira
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <379DA730.8845ADBF@iname.com>
Rob Warnock wrote:

> Fernando Mato Mira  <········@iname.com> wrote:
> +---------------
> | Bruno Haible wrote:
> | > * Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
> | >   times as high as a shell's startup time. You can therefore use it as a
> | >   script interpreter (with structures and CLOS), or as a CGI interpreter.
> |
> | Now _THIS_ is news. It means one can forget about Scheme for scripts
> +---------------
>
> Why do you say *that*?  SIOD Scheme is similarly fast-starting (roughly
> 2.5 times Bourne Shell to do "hello world")...

Because it takes a lot of time and energy to get the Scheme people to understand
not everybody can live in a perfect world, I am starting to get tired, and
switching between CL and Scheme is too expensive (time+money).

If you could get implementations of CL for any combination of features you'd like
(speed, small footprint, JVM-compatible, C++ interfacing, continuations..) would
you care about Scheme?
From: Rob Warnock
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <7nkahl$6tgqu@fido.engr.sgi.com>
Fernando Mato Mira  <········@iname.com> wrote:
+---------------
| Rob Warnock wrote:
| > | Now _THIS_ is news. It means one can forget about Scheme for scripts
| > +---------------
| > Why do you say *that*?  SIOD Scheme is similarly fast-starting (roughly
| > 2.5 times Bourne Shell to do "hello world")...
| 
| Because it takes a lot of time and energy to get the Scheme people to
| understand not everybody can live in a perfect world, I am starting to
| get tired, and switching between CL and Scheme is too expensive (time+money).
| 
| If you could get implementations of CL for any combination of features
| you'd like (speed, small footprint, JVM-compatible, C++ interfacing,
| continuations..) would you care about Scheme?
+---------------

I honestly don't know. It's certainly a question that keeps coming up
for me, too. I'm currently *much* more fluent in Scheme than in CL,
yet whenever I bump up against "practicalities" when hacking Scheme,
I find myself turning to the CLHS for inspiration. I personally prefer
the "look" of Scheme programs (mainly cuz it's a Lisp1), but could live
with CL if I had to.

It's a question that I don't think I'll answer any time soon, but I
also know it won't go away...


-Rob

p.s. Like most everyone who's dived into Scheme at any depth, I have my
own toy implementation bubbling on the back burner. I've been tempted to
call it "Common Scheme" (a Lisp1 subset of CL plus tail-call-opt. & call/cc),
but figured that would just get me flamed from *both* sides...  ;-}  ;-}

-----
Rob Warnock, 8L-855		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		FAX: 650-933-0511
Mountain View, CA  94043	PP-ASEL-IA
From: Fernando Mato Mira
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <379DB234.321665A2@iname.com>
Rob Warnock wrote:

> p.s. Like most everyone who's dived into Scheme at any depth, I have my
> own toy implementation bubbling on the back burner. I've been tempted to
> call it "Common Scheme" (a Lisp1 subset of CL plus tail-call-opt. & call/cc),
> but figured that would just get me flamed from *both* sides...  ;-}  ;-}

You too! Hey, I'm driving up to the Area after SIGGRAPH. Care to visit some VCs?
:-I
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <933329511.749004@fire-int>
Rob Warnock (····@rigden.engr.sgi.com) wrote:
: Fernando Mato Mira  <········@iname.com> wrote:
: +---------------
: | Bruno Haible wrote:
: | > * Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
: | >   times as high as a shell's startup time. You can therefore use it as a
: | >   script interpreter (with structures and CLOS), or as a CGI interpreter.
: | 
: | Now _THIS_ is news. It means one can forget about Scheme for scripts
: +---------------

: Why do you say *that*?  SIOD Scheme is similarly fast-starting (roughly
: 2.5 times Bourne Shell to do "hello world")...

Because siod is a very minimal scheme implementation (and not
conforming to any standard), while Clisp implements a large subset of
ANSI Common Lisp?

P.
From: Christopher Browne
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <3Xrn3.57940$AU3.1455044@news2.giganews.com>
On Tue, 27 Jul 1999 11:12:33 +0200, Fernando Mato Mira
<········@iname.com> wrote: 
>Bruno Haible wrote:
>
>> * Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
>>   times as high as a shell's startup time. You can therefore use it as a
>>   script interpreter (with structures and CLOS), or as a CGI interpreter.
>
>Now _THIS_ is news. It means one can forget about Scheme for scripts
>I didn't know that.
>[Hm. But what about fast _compiled_ scripts with fast startup?]

I finally got CMU-CL installed on my Debian box (it's been a "not
flawless" install, generally...) and don't see a terribly perceptible
difference between the startup time for CMU-CL and that for CLISP.
-- 
If you stand in the middle of a library and shout "Aaaaaaaaargh" at the
top of your voice, everyone just stares at you. If you do the same thing
on an aeroplane, why does everyone join in?
········@hex.net- <http://www.ntlug.org/~cbbrowne/langlisp.html>
From: Gareth McCaughan
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <86k8rnov3m.fsf@g.local>
Bruno Haible wrote:

[all snipped, in fact]

Why is it that when someone says "CLISP is great, but it's
rather slow for many things" people jump up and say "That's
unfair! CLISP is great!" ?

CLISP is a lovely system. It's just a pity it does many things
so slowly. (I am aware that many of its benefits are consequences
of the same decisions that lead also to its slowness.)

-- 
Gareth McCaughan  ················@pobox.com
sig under construction
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <933037707.962221@fire-int>
Gareth McCaughan (················@pobox.com) wrote:

: Why is it that when someone says "CLISP is great, but it's
: rather slow for many things" people jump up and say "That's
: unfair! CLISP is great!" ?

This has not been the case this time.

: CLISP is a lovely system. It's just a pity it does many things
: so slowly. (I am aware that many of its benefits are consequences
: of the same decisions that lead also to its slowness.)

But you have not complained that Clisp is slow!  You have complained
that some parts of Clisp are too fast.

How could it be that you don't see the strangeness of this statement of
yours?

P.
From: Gareth McCaughan
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <861zdtwwyi.fsf@g.local>
Pierpaolo Bernardi wrote:

>: Why is it that when someone says "CLISP is great, but it's
>: rather slow for many things" people jump up and say "That's
>: unfair! CLISP is great!" ?
> 
> This has not been the case this time.

It's how it looks to me.

>: CLISP is a lovely system. It's just a pity it does many things
>: so slowly. (I am aware that many of its benefits are consequences
>: of the same decisions that lead also to its slowness.)
> 
> But you have not complained that Clisp is slow!  You have complained
> that some parts of Clisp are too fast.

If you really think that was my complaint, I can only conclude that
either I am a much worse communicator than I thought or your English
comprehension isn't very good.

-- 
Gareth McCaughan  ················@pobox.com
sig under construction
From: Tim Bradshaw
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <ey3hfmqd5an.fsf@lostwithiel.tfeb.org>
> But you have not complained that Clisp is slow!  You have complained
> that some part of Clisp are too fast. 

> How could it be that you don't see the strangeness of this statement of
> yours?

Oh come *on*, do not be so bloody misleading.  It was completely and
entirely clear that what he was complaining about was the slowness of
user-written code.  Only someone with a very poor grasp of English, or
very stupid, or deliberately trying to start an argument would not see
that.

If you really do not understand, take it from me, he was *not*
complaining about things being too fast.

--tim
From: Tim Bradshaw
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <nkj3dyas7pc.fsf@tfeb.org>
Gareth McCaughan <················@pobox.com> writes:

> Bruno Haible wrote:
> 
> [all snipped, in fact]
> 
> Why is it that when someone says "CLISP is great, but it's
> rather slow for many things" people jump up and say "That's
> unfair! CLISP is great!" ?
> 
> CLISP is a lovely system. It's just a pity it does many things
> so slowly. (I am aware that many of its benefits are consequences
> of the same decisions that lead also to its slowness.)
> 

I think I should say that I agree with this, since I've posted a
couple of nasty articles today on the non-clisp side of this debate.

I think clisp is a really good system.  I think it has some defects,
its slightly odd performance characteristics being one, but I think
the other Lisps have defects too.

--tim
From: Robert Monfera
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <3798D2C0.FA609D5D@fisec.com>
Gareth McCaughan wrote:
...
> > Should be easy to fix.  Just insert a delay in the interpreter loop
> > whenever a built-in function is called.  You may even make this delay
> > so big as to make build-it-yourself functions more convenient, thus
> > encouraging constructing abstractions.
> 
> A brilliant idea. I'll do it at once.
...

Be careful though - you should only make built-in primitive functions
slower when they are not called from user-constructed primitives -
otherwise those would be slower too.

Maybe one could introduce a declaration to somehow distinguish between
built-in primitives and primitives built by the programmer.

Robert
From: Erik Naggum
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <3141910604950733@naggum.no>
* ········@cli.di.unipi.it (Pierpaolo Bernardi)
| Allegro doesn't grok them [inline functions] either. 

  Allegro CL inlines system functions, but not user-defined functions.
  various measures can be used to obtain the speed effect without the code
  bloat effect.
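
  the portable spelling of the request, for reference, is the INLINE
  declaration, which an implementation is free to honor or ignore:

```lisp
;; Portable request to inline a user-defined function.  An
;; implementation may honor or ignore it; as noted in this thread,
;; Allegro CL ignores it for user-defined functions.
(declaim (inline square))
(defun square (x) (* x x))

(defun sum-of-squares (a b)
  (declare (optimize speed))
  (+ (square a) (square b)))   ; SQUARE may be open-coded here
```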

| I don't understand this.  You are complaining that built-in functions
| are too fast?

  it's a very valid concern with CLISP because it means that any attempt to
  make use of the powerful abstractions that Common Lisp offers will cost
  you a tremendous lot in performance.  the code that people write in CLISP
  looks like Lisp Assembler -- they go to great lengths to use built-in
  functions for speed.

| Should be easy to fix.  Just insert a delay in the interpreter loop
| whenever a built-in function is called.  You may even make this delay
| so big as to make build-it-yourself functions more convenient, thus
| encouraging constructing abstractions.

  I take it that you mean that encouraging abstraction is bad.  if so,
  I concede that CLISP offers you the best choice, bar none.

#:Erik
-- 
  suppose we blasted all politicians into space.
  would the SETI project find even one of them?
From: Bruno Haible
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <7nimg9$dt9$1@news.u-bordeaux.fr>
Erik Naggum <····@naggum.no> wrote:
>
>  it's a very valid concern with CLISP because it means that any attempt to
>  make use of the powerful abstractions that Common Lisp offers will cost
>  you a tremendous lot in performance.  the code that people write in CLISP
>  looks like Lisp Assembler -- they go to great lengths to use built-in
>  functions for speed.

Maybe. On the other hand, I've seen that people write code which avoids
Common Lisp built-in data types, or even simulate these data types:

  - Garnet uses its own kind of self-made hash tables. Are the vendors'
    hash tables too slow, or do they have an unusable hash function?
    CLISP at least has fast hash tables, and gets the hash function right.

  - Gilbert Baumann, when writing a universal lexer/parser, stopped using
    bit-vectors, because in some implementation, compiling to native code,
    bit-vectors were unbearably slow.
    CLISP at least has fast bit-vectors.

Providing a native code compiler with all bells and whistles is respectable,
but it is not an excuse for badly implementing Common Lisp's datatypes.

                Bruno                            http://clisp.cons.org/
From: Nick Levine
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <7nk7c5$97n$1@barcode.tesco.net>
>  - Garnet uses its own kind of self-made hash tables. Are the vendors'
>    hash tables too slow, or do they have an unusable hash function?

I assume you are talking about Garnet's KR? That also served as a home-baked
substitute for CLOS. Its initial justification was that when Garnet was
first written, CL was in its infancy and most implementations did not at the
time have CLOS.

I always found KR to be unwieldy, undebuggable (except with an almighty
struggle), and slower than CLOS would have been. [Just my opinion though.]

The advantages of using the implementor's CLOS, hash-tables, etc are that
they will be more reliable, supported, faster, and comprehensible to some
stranger who has to fix your code on your behalf five years from now.

- n
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <933329062.888703@fire-int>
Bruno Haible (······@clisp.cons.org) wrote:

:   - Gilbert Baumann, when writing a universal lexer/parser, stopped using
:     bit-vectors, because in some implementation, compiling to native code,
:     bit-vectors were unbearably slow.
:     CLISP at least has fast bit-vectors.

This does not match my experience.  Here are some simple tests.  The
ERATOSTENE function is taken from a program of mine which could not
work on Allegro because of RANDOM reasons.

All tests run on a 486/66 Linux box.

Allegro 5.0.1:
--------------

WC(17): (time (prog1 t (eratostene 1000000)))
; cpu time (non-gc) 3,210 msec user, 10 msec system
; cpu time (gc)     0 msec user, 0 msec system
; cpu time (total)  3,210 msec user, 10 msec system
; real time  3,890 msec
; space allocation:
;  2 cons cells, 0 symbols, 125,016 other bytes, 0 static bytes
T

WC(22): (time (prog1 t (bitest 100000)))
; cpu time (non-gc) 730 msec user, 0 msec system
; cpu time (gc)     0 msec user, 0 msec system
; cpu time (total)  730 msec user, 0 msec system
; real time  789 msec
; space allocation:
;  2 cons cells, 0 symbols, 12,512 other bytes, 0 static bytes
T

USER(40): (time (prog1 t (bitest-2 1000000)))
; cpu time (non-gc) 18,290 msec user, 10 msec system
; cpu time (gc)     540 msec user, 0 msec system
; cpu time (total)  18,830 msec user, 10 msec system
; real time  20,805 msec
; space allocation:
;  3 cons cells, 0 symbols, 125,048 other bytes, 0 static bytes
T

----------------------------------------------------------------

LispWorks 4.1.16 Beta:
----------------------

CL-USER 17 > (time (prog1 t (eratostene 1000000)))

9.8 seconds used.
Standard Allocation 127016 bytes.
Fixlen Allocation 1364 bytes.
T

CL-USER 22 >  (time (prog1 t (bitest 100000)))

1.6 seconds used.
Standard Allocation 13696 bytes.
Fixlen Allocation 352 bytes.
T

CL-USER 23 >  (time (prog1 t (bitest-2 100000)))

3.2 seconds used.
Standard Allocation 14168 bytes.
Fixlen Allocation 319 bytes.
T

[Note: LispWorks times are realtimes. Sigh.]

----------------------------------------------------------------

Clisp 1999-01-08 (January 1999)
-------------------------------

[4]> (time (prog1 t (eratostene 1000000)))

Real time: 74.52912 sec.
Run time: 67.33 sec.
Space: 125008 Bytes
T

[7]> (time (prog1 t (bitest 100000)))

Real time: 3.711812 sec.
Run time: 3.26 sec.
Space: 12508 Bytes
T

[8]> (time (prog1 t (bitest-2 100000)))

Real time: 4.390969 sec.
Run time: 3.97 sec.
Space: 12532 Bytes
T

----------------------------------------------------------------

Poplog v15.53
-------------

== (time (prog1 t (eratostene 1000000)))
CPU TIME: 187.00 seconds
T

== (time (prog1 t (bitest 100000)))
CPU TIME: 15.47 seconds
T

== (time (prog1 t (bitest-2 100000)))
CPU TIME: 17.66 seconds
T

----------------------------------------------------------------

Where ERATOSTENE, BITEST and BITEST-2 are defined as follows:

(defun eratostene (max)
  (declare (optimize (speed 3) (safety 0) (space 0) (debug 0)))
  (check-type max fixnum)
  (locally (declare (fixnum max))
    (let ((v (make-array max :element-type 'bit :initial-element 1)))
      (declare (type simple-bit-vector v))
      (setf (sbit v 0) 0)
      (setf (sbit v 1) 0)
      (let ((max/2 (truncate max 2)))
        (loop for primo of-type fixnum = 2 then (+ primo 1)
            while (< primo max/2)
            when (= 1 (sbit v primo))
            do (loop for i of-type fixnum = (+ primo primo) then (+ i primo)
                   while (< i max)
                   do (setf (sbit v i) 0))))
      v)))

(defun bitest (n)
  (declare (optimize (speed 3) (safety 0) (space 0) (debug 0)))
  (check-type n fixnum)
  (locally (declare (fixnum n))
    (let ((v (make-array n :element-type 'bit :initial-element 1)))
      (declare (type simple-bit-vector v))
      (loop for i of-type fixnum across v do (setf (sbit v i) 0))
      (foo v)
      (loop for i of-type fixnum across v do (setf (sbit v i) 1))
      v)))

(defun foo (v)
  (declare (ignore v))
  t)

(defun bitest-2 (n)
  (declare (optimize (speed 3) (safety 0) (space 0) (debug 0)))
  (check-type n fixnum)
  (locally (declare (fixnum n))
    (let ((v (make-array n
                         :element-type 'bit
                         :initial-element 1
                         :adjustable t)))
      (declare (type bit-vector v))
      (loop for i of-type fixnum across v do (setf (bit v i) 0))
      (foo v)
      (loop for i of-type fixnum across v do (setf (bit v i) 1))
      v)))
      
----------------------------------------------------------------

Summary:

               ACL   LW  Clisp   PopCL

ERATOSTENE    3.20  9.8  74.52  187.00
BITEST         .73  1.6   3.26   15.47
BITEST-2     18.83  3.2   3.97   17.66

These simple tests tentatively indicate that native code compilers
usually outperform byte-coded ones on bit-vectors (duh), with the
notable exception that Allegro shows a huge performance drop for
non-simple bit-vectors.

Now, according to the McGrath-Naggum Unified Theory of Programming, I
must conclude that Allegro users, when dealing with bit-vectors, write
a kind of lisp assembly code in order to avoid using non-simple
bit-vectors.

P.
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <932954161.827750@fire-int>
Erik Naggum (····@naggum.no) wrote:
: * ········@cli.di.unipi.it (Pierpaolo Bernardi)
: | Allegro doesn't grok them [inline functions] either. 

:   Allegro CL inlines system functions, but not user-defined functions.
:   various measures can be used to obtain the speed effect without the code
:   bloat effect.

I know this.  I thought that the original comment said that the
compiler didn't obey inline declarations.  Maybe I misread.

: | I don't understand this.  You are complaining that built-in fuctions
: | are too fast?

:   it's a very valid concern with CLISP because it means that any attempt to
:   make use of the powerful abstractions that Common Lisp offers will cost
:   you a tremendous lot in performance. 

If a programmer writes bad code, that is the programmer's problem.  He
should not blame the lisp implementation he's using.

:   the code that people write in CLISP
:   looks like Lisp Assembler -- they go to great lengths to use built-in
:   functions for speed.

I have never noticed this, and it is certainly not true of code that I
write.  Can you point me to any publicly available example?

What code does exist, outside of the Clisp implementation, that is
optimized for Clisp?  I don't know of any.

And, IMO, if this turns out to be the case, a likely explanation could
be that a beginner is more likely to be using Clisp than a native code
compiler, for the obvious price reasons.

: | Should be easy to fix.  Just insert a delay in the interpreter loop
: | whenever a built-in function is called.  You may even make this delay
: | so big as to make build-it-yourself functions more convenient, thus
: | encouraging constructing abstractions.

:   I take it that you mean that encouraging abstraction is bad.  

You are wrong. I don't mean this, and I can't see how you can conclude
that I mean this from what I have written.

P.
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <87so6bhfe8.fsf@orion.dent.isdn.cs.tu-berlin.de>
········@cli.di.unipi.it (Pierpaolo Bernardi) writes:

[rest snipped]

> And, IMO, if this turns out to be the case, a likely explanation could
> be that a beginner is more likely to be using Clisp than a native code
> compiler, for the obvious price reasons.

What _price_ reasons are there that keep a _beginner_ from using
either CMU CL or one of the free versions of Allegro CL or Harlequin's 
LispWorks?

CLISP might have been ported more widely than most other
implementations, thus being more available, but I don't see a price
reason for this...

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Gareth McCaughan
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <863dybqcdb.fsf@g.local>
Pierpaolo Bernardi wrote:

[#\Erik:]
>:   it's a very valid concern with CLISP because it means that any attempt to
>:   make use of the powerful abstractions that Common Lisp offers will cost
>:   you a tremendous lot in performance. 
> 
> If a programmer writes bad code, that is the programmer's problem.  He
> should not blame the lisp implementation he's using.

Really?

Let's take a more extreme example. Suppose you have a Lisp compiler
that screws up whenever you try to do simple CLOS things: it gives
wrong answers, or goes into an infinite loop, or something. If you
are (for whatever reason) using this implementation, and you avoid
using the features that produce these terrible results, does that
make you a bad programmer? Is it your problem rather than that of
the implementation?

Now, suppose that instead of actually going into an *infinite* loop,
the system just behaves really appallingly slowly when using those
features: say, a factor of 10^6 slower than it "ought" to be. If you
avoid using features that lead to a catastrophic slowdown, is that
bad practice? Should you be blamed for writing bad programs, not the
implementation for making it harder to write good ones?

What if it's a factor of 10^5? 10^4? 10^3? 10^2? At this point we're
right in the CLISP ball-park, I think.

Is it bad practice for a programmer to write his code so that it
doesn't go unbearably slowly on his system?

>:   the code that people write in CLISP
>:   looks like Lisp Assembler -- they go to great lengths to use built-in
>:   functions for speed.
> 
> I have never noticed this, and it is certainly not true of code that I
> write.  Can you point me to any publicly available example?
> 
> What code does exist, outside of the Clisp implementation, that is
> optimized for Clisp?  I don't know of any.

I have written code that's sort-of optimised for CLISP. More
precisely, I've done whatever I had to to get performance good
enough for my purposes on CLISP. The resulting code doesn't look
like "Lisp Assembler", but that may just indicate that I don't
know much about optimising code for CLISP or that I care about
things other than performance too.

-- 
Gareth McCaughan  ················@pobox.com
sig under construction
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <933042776.202871@fire-int>
Gareth McCaughan (················@pobox.com) wrote:
: Pierpaolo Bernardi wrote:
...
: > If a programmer writes bad code, that is the programmer's problem.  He
: > should not blame the lisp implementation he's using.

: Really?

Yes.

: Let's take a more extreme example. Suppose you have a Lisp compiler
: that screws up whenever you try to do simple CLOS things: it gives
: wrong answers, or goes into an infinite loop, or something. 

You cannot conflate bugs with slowness.

: If you
: are (for whatever reason) using this implementation, and you avoid
: using the features that produce these terrible results, does that
: make you a bad programmer? Is it your problem rather than that of
: the implementation?

Definitely.

: Now, suppose that instead of actually going into an *infinite* loop,
: the system just behaves really appallingly slowly when using those
: features: say, a factor of 10^6 slower than it "ought" to be. If you
: avoid using features that lead to a catastrophic slowdown, is that
: bad practice? Should you be blamed for writing bad programs, not the
: implementation for making it harder to write good ones?

: What if it's a factor of 10^5? 10^4? 10^3? 10^2? At this point we're
: right in the CLISP ball-park, I think.

: Is it bad practice for a programmer to write his code so that it
: doesn't go unbearably slowly on his system?

Yes. 

: >:   the code that people write in CLISP
: >:   looks like Lisp Assembler -- they go to great lengths to use built-in
: >:   functions for speed.
: > 
: > I have never noticed this, and it is certainly not true of code that I
: > write.  Can you point me to any publicly available example?
: > 
: > What code does exist, outside of the Clisp implementation, that is
: > optimized for Clisp?  I don't know of any.

: I have written code that's sort-of optimised for CLISP. More
: precisely, I've done whatever I had to to get performance good
: enough for my purposes on CLISP.

That's interesting.  For what reason have you not used one of the
compilers with better performance?

P.
From: Gareth McCaughan
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <864sipwxez.fsf@g.local>
Pierpaolo Bernardi wrote:

>: Let's take a more extreme example. Suppose you have a Lisp compiler
>: that screws up whenever you try to do simple CLOS things: it gives
>: wrong answers, or goes into an infinite loop, or something. 
> 
> You cannot conflate bugs with slowness.

Why not? If a Lisp system took a million cycles to do every operation
then it would be just as unusable as if it returned 999 for (CAR NIL)
every now and then.

>: If you
>: are (for whatever reason) using this implementation, and you avoid
>: using the features that produce these terrible results, does that
>: make you a bad programmer? Is it your problem rather than that of
>: the implementation?
> 
> Definitely.

Why?

>: Is it bad practice for a programmer to write his code so that it
>: doesn't go unbearably slowly on his system?
> 
> Yes. 

Why?

(If you mean "because his code is for others to run too", let me
amend my question to "... on the systems on which it will be running?".)

>: I have written code that's sort-of optimised for CLISP. More
>: precisely, I've done whatever I had to to get performance good
>: enough for my purposes on CLISP.
> 
> That's interesting.  For what reason have you not used one of the
> compilers with better performance?

The machine I run CLISP on doesn't have any other Common Lisps
that can run on it. I also have an x86 unix box; I run CMU CL
on that. (Though if I wanted to do bignum-intensive stuff, or
replace bash with Lisp, or use arbitrary-precision reals, I would
use CLISP on that machine too.)

-- 
Gareth McCaughan  ················@pobox.com
sig under construction
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <933330750.852087@fire-int>
Gareth McCaughan (················@pobox.com) wrote:
: Pierpaolo Bernardi wrote:

: >: Let's take a more extreme example. Suppose you have a Lisp compiler
: >: that screws up whenever you try to do simple CLOS things: it gives
: >: wrong answers, or goes into an infinite loop, or something. 
: > 
: > You cannot conflate bugs with slowness.

: Why not? If a Lisp system took a million cycles to do every operation
: then it would be just as unusable as if it returned 999 for (CAR NIL)
: every now and then.

Agreed.  But your example is unrealistic.

: >: If you
: >: are (for whatever reason) using this implementation, and you avoid
: >: using the features that produce these terrible results, does that
: >: make you a bad programmer? Is it your problem rather than that of
: >: the implementation?
: > 
: > Definitely.

: Why?

Because only a fool would use a compiler 'that screws up whenever you
try to do simple CLOS things: it gives wrong answers, or goes into an
infinite loop, or something'.


: >: Is it bad practice for a programmer to write his code so that it
: >: doesn't go unbearably slowly on his system?
: > 
: > Yes. 

: Why?

Because ruining the code you write in order to adapt it to the quirks
of a particular compiler does not make any economic sense.

If your code, written in the style that satisfies you, is too slow if
compiled with Clisp, then you should use a different compiler, or buy
a faster machine, or complain to Bruno (sometimes this works).

P.
From: Gareth McCaughan
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <86d7x98yqr.fsf@g.local>
Pierpaolo Bernardi wrote:

>:>: Let's take a more extreme example. Suppose you have a Lisp compiler
>:>: that screws up whenever you try to do simple CLOS things: it gives
>:>: wrong answers, or goes into an infinite loop, or something. 
>:> 
>:> You cannot conflate bugs with slowness.
> 
>: Why not? If a Lisp system took a million cycles to do every operation
>: then it would be just as unusable as if it returned 999 for (CAR NIL)
>: every now and then.
> 
> Agreed.  But your example is unrealistic.

Of course it is. But there's a continuum between 1000000 cycles and
1 cycle (well, not actually a continuum!). At what point does it
become permissible to "conflate bugs with slowness"?

>:>: If you
>:>: are (for whatever reason) using this implementation, and you avoid
>:>: using the features that produce these terrible results, does that
>:>: make you a bad programmer? Is it your problem rather than that of
>:>: the implementation?
>:> 
>:> Definitely.
> 
>: Why?
> 
> Because only a fool would use a compiler 'that screws up whenever you
> try to do simple CLOS things: it gives wrong answers, or goes into an
> infinite loop, or something'.

Really? What if that compiler implemented other aspects of Common Lisp
correctly and efficiently, and was the only one available on the platform
this person needed to use, and if he or she felt that CL-without-CLOS
was still a better language than any of the alternatives?

It seems like you're saying: if a compiler is broken in some respect,
you must not use it even if it's the best tool available. I think
this is crazy. It's especially crazy given that the CL compiler you
like best is known to have a significant non-conformance in its CLOS
implementation. :-)

>:>: Is it bad practice for a programmer to write his code so that it
>:>: doesn't go unbearably slowly on his system?
>:> 
>:> Yes. 
> 
>: Why?
> 
> Because ruining the code you write in order to adapt it to the quirks
> of a particular compiler does not make any economic sense.

Who said "ruining" other than you?

And why does it not make economic sense? What if this person is
programming for fun rather than for profit? What if there's only
one compiler available for the machine that has to be used, and
no realistic prospect of others appearing?

-- 
Gareth McCaughan  ················@pobox.com
sig under construction
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <879082w3og.fsf@orion.dent.isdn.cs.tu-berlin.de>
········@cli.di.unipi.it (Pierpaolo Bernardi) writes:

> You cannot conflate bugs with slowness.

There are such things as performance bugs, IMHO.  Take for example CMU
CL (lest I be accused of being unfair to CLISP again).  The generalized
sequence mapping operations have a performance bug:  Since they simply
use elt to step through all sequences, they exhibit O(n^2) behaviour on
lists.  Although the result of any program won't change because of
this, and nothing in ANSI CL demands that the mapping operation be
performed in O(n), I'd still consider this a bug, for two reasons:

a) The unwary user will be quite surprised when mapping operations
   suddenly start operating in O(n^2) time, and

b) once he is aware of this behaviour, he will most likely start
   working around it, which will decrease the quality of his code.

Now this problem isn't as severe as it sounds at first, since O(n^2)
on small lists is still not a biggie, and long lists are often a
sign that you are using the wrong data structure anyway.  But it
still is a severe defect in CMU CL, and I'd like it fixed some time,
the sooner the better.  Sadly I haven't had the time to polish up my
implementation of mapping to a reasonable standard yet, and thus the
problem still persists...
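
To illustrate the point (my own sketch, not CMU CL source, and
MAP-LIST-SAFE is a made-up name): the quadratic cost comes from
stepping a list with ELT, which is O(i) per access, so dispatching on
the argument types first restores linear behaviour.

```lisp
;; Hypothetical sketch (MAP-LIST-SAFE is an invented name, not CMU CL
;; code): avoid the O(n^2) trap of stepping lists with ELT by
;; dispatching on the sequence types.
(defun map-list-safe (function &rest sequences)
  (if (every #'listp sequences)
      ;; MAPCAR walks each list with CDR, so this case is O(n).
      (apply #'mapcar function sequences)
      ;; Otherwise coerce to vectors, where AREF is O(1), and loop
      ;; over indices; the whole pass stays linear.
      (let* ((vs (mapcar (lambda (s) (coerce s 'vector)) sequences))
             (n (reduce #'min vs :key #'length)))
        (loop for i below n
              collect (apply function
                             (mapcar (lambda (v) (aref v i)) vs))))))
```

Nothing in ANSI CL forbids MAP itself from doing exactly this
internally, which is why the O(n^2) behaviour reads as a performance
bug rather than a necessity.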

But denying that this is a problem of the implementation, and shifting 
the blame onto the user seems like the easy (and wrong) way out to me.

> : If you
> : are (for whatever reason) using this implementation, and you avoid
> : using the features that produce these terrible results, does that
> : make you a bad programmer? Is it your problem rather than that of
> : the implementation?
> 
> Definitely.

It is definitely the problem of the implementation, if the user has
brought his performance problems to the attention of the implementor,
and the implementor has through lack of action "forced" the user to
use work-arounds.  Now if the inaction has good reasons (like
differences in implementation goals, or lack of resources/time, or
whatever), then this is not really the fault of the implementor
personally, but this doesn't shift the blame to the user, unless the
user clings to the implementation he is using without proper reasons.

> : Is it bad practice for a programmer to write his code so that it
> : doesn't go unbearably slowly on his system?
> 
> Yes.

Simplistic answers don't provide any insight into the situation.

Regs, Pierre

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <933331200.672627@fire-int>
Pierre R. Mai (····@acm.org) wrote:
: ········@cli.di.unipi.it (Pierpaolo Bernardi) writes:

: > You cannot conflate bugs with slowness.

: There are such things as performance bugs, IMHO.  Take for example CMU
: CL (lest I be accused of being unfair to CLISP again).  The generalized
: sequence mapping operations have a performance bug:  Since they simply
: use elt to step through all sequences, they exhibit O(n^2) behaviour on
: lists.  Although the result of any program won't change because of
: this, and nothing in ANSI CL demands that the mapping operation be
: performed in O(n), I'd still consider this a bug, for two reasons:

I would consider this a bug too, and if I were using CMUCL I would
have tried to fix it.

: a) The unwary user will be quite surprised when mapping operations
:    suddenly start operating in O(n^2) time, and

: b) once he is aware of this behaviour, he will most likely start
:    working around it, which will decrease the quality of his code.

I wouldn't allow a programmer like this to program my toaster.

The correct action is to fix the bug, in the case of CMUCL; or to
complain to the vendor, in the case of a commercial product; or to
switch compilers, if the first two alternatives are not practical.

: Now this problem isn't as severe as it sounds at first, since O(n^2)
: on small lists is still not a biggie, and long lists are often a
: sign that you are using the wrong data structure anyway.  But it
: still is a severe defect in CMU CL, and I'd like it fixed some time,
: the sooner the better.  Sadly I haven't had the time to polish up my
: implementation of mapping to a reasonable standard yet, and thus the
: problem still persists...

: But denying that this is a problem of the implementation, and shifting 
: the blame onto the user seems like the easy (and wrong) way out to me.

The problem *is* with the user.  He should fix the tool, or pay to get
the tool fixed for him.  

: > : If you
: > : are (for whatever reason) using this implementation, and you avoid
: > : using the features that produce these terrible results, does that
: > : make you a bad programmer? Is it your problem rather than that of
: > : the implementation?
: > 
: > Definitely.

: It is definitely the problem of the implementation, if the user has
: brought his performance problems to the attention of the implementor,
: and the implementor has through lack of action "forced" the user to
: use work-arounds.

Since when are people forced to use a particular compiler?

: Simplistic answers don't provide any insight into the situation.

I hope it is now clearer what I wanted to say.

P.
From: Gareth McCaughan
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <86g1258z3m.fsf@g.local>
Pierpaolo Bernardi wrote:

>: It is definitely the problem of the implementation, if the user has
>: brought his performance problems to the attention of the implementor,
>: and the implementor has through lack of action "forced" the user to
>: use work-arounds.
> 
> Since when are people forced to use a particular compiler?

I am forced to use CLISP on my Acorn RiscPC because there is
no other Common Lisp system that runs on it. I am not in a
position to "fix" the main problem with it -- namely, that
it's slow -- because I don't have the time to write a native-code
CL compiler. There is no point in complaining to the "vendor"
because (half the point of CLISP being its smallness and its
portability) there's (quite rightly) no chance of that resulting
in a native-code compiler appearing. "Complain" is the wrong
word, anyway: I don't *object* to CLISP not having a native-code
compiler, I'd just like it more if it had one.

So, suppose I want to do something on that machine in Common Lisp;
suppose I can only get performance sufficient for my needs by
writing slightly nastier code than I would otherwise like.

According to your theories, this is an unacceptable option. So
should I

  - port CMU CL to an architecture completely different from
    any it currently runs on?

  - pay Franz or Harlequin a few hundreds of thousands of dollars
    to port their products to a platform hardly anyone uses?

  - write my own native-code compiler and plug it into CLISP?

  - Forget about Lisp and write in C or C++?

  - Sell the machine (which gets plenty of use for things that
    don't run on any other platform) and buy something entirely
    different?

It seems to me that all those are *obviously* worse solutions than
working around those respects in which CLISP is suboptimal for my
purposes.

Now, as it happens, I do also have another machine that runs CMU CL.
When I need Lisp code to run fast, I use that. So it's not a big
deal for me. But suppose I didn't have that other machine. Are you
saying that unless I prefer buying a new computer to tailoring my
code to what I have, I'm being an irresponsible programmer?

                               *

Of course, I use that machine only for fun. If I were being paid to
develop software in Lisp, perhaps the above arguments would be bogus.
But suppose I were being paid to develop software in Lisp for some
architecture not supported by any of the other implementations; some
embedded thing, perhaps. What then?

-- 
Gareth McCaughan  ················@pobox.com
sig under construction
From: Christopher B. Browne
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <slrn7pnnjf.1p0.cbbrowne@knuth.brownes.org>
On 26 Jul 1999 01:55:23 GMT, Pierpaolo Bernardi <········@cli.di.unipi.it> posted:
>Erik Naggum (····@naggum.no) wrote:
>: | Should be easy to fix.  Just insert a delay in the interpreter loop
>: | whenever a built-in function is called.  You may even make this delay
>: | so big as to make build-it-yourself functions more convenient, thus
>: | encouraging constructing abstractions.
>
>:   I take it that you mean that encouraging abstraction is bad.  
>
>You are wrong. I don't mean this, and I can't see how you can conclude
>that I mean this from what I have written.

It's reasonable to expect that if "native operators" work more efficiently
than "generated ones," this will encourage developers to prefer using
"native" ones.

That being said, there are two confounding effects:
a) Constructing your own operators using macros provides a direct
translation of "generated operators" into "native" ones, which mitigates
the problem.

b) Constructing your own "language" supplies a "cost of comprehension."
Is it preferable for a new developer to:
  1: Learn "regular, colloquial" Lisp, or
  2: Learn your variations on Lisp, namely the language that is
     "Lisp Plus Some K001 operators we made up."

I'd tend to think it preferable to go to "tried and true" traditional
Lisp, as that is used by a much wider community.
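
A minimal sketch of point (a), with invented names (SQUARE,
HYPOTENUSE): the homegrown operator expands at compile time into the
built-in one, so the abstraction carries no run-time penalty.

```lisp
;; Hypothetical example of (a): SQUARE is a "generated operator" that
;; expands into native multiplication, so using it costs nothing at
;; run time.  GENSYM guards against evaluating X twice.
(defmacro square (x)
  (let ((v (gensym)))
    `(let ((,v ,x))
       (* ,v ,v))))

(defun hypotenuse (a b)
  ;; (square a) compiles to the same code as (* a a).
  (sqrt (+ (square a) (square b))))
```

Point (b)'s comprehension cost is real, though: a reader now has to
learn what SQUARE means before reading HYPOTENUSE.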
-- 
"If you were plowing a field, which would you rather use?  Two strong oxen
 or 1024 chickens?"
-- Seymour Cray
········@ntlug.org- <http://www.hex.net/~cbbrowne/lsf.html>
From: Joerg-Cyril Hoehle
Subject: abstraction, OO and macros (Was: CMU CL vs. CLISP?)
Date: 
Message-ID: <qkpemhucetj.fsf_-_@tzd.dont.telekom.spam.de.me>
Hello,

I'm trying to start a discussion about abstraction, optimization,
macros and also OO.

In the last few years I've observed a shift away from the formerly low
opinion of macros.  Maybe Peter Norvig started it with PAIP, and Paul
Graham's On Lisp made the point extremely clear.


········@news.brownes.org (Christopher B. Browne) writes:
> It's reasonable to expect that if "native operators" work more efficiently
> than "generated ones," this will encourage developers to prefer using
> "native" ones.
> 
> That being said, there are two confounding effects:
> a) Constructing your own operators using macros provides a direct
> translation of "generated operators" into "native" ones, which mitigates
> the problem.

That's old style.  All newer Lisp books advise against macros for
that purpose and heavily recommend DECLARE INLINE.  Yet long-
experienced Lispers still conclude that "macros are the only way to
optimize portably".

We are really discussing compiler qualities at this point, and IMHO
this is independent of other choices (bytecode or native code, CMUCL
or CLISP or ACL, etc.).

> b) Constructing your own "language" supplies a "cost of comprehension."
> Is it preferable for a new developer to:
What do you expect from new developers in any language?  "Here's the C
language and here are gadzillions of various libraries."

>   1: Learn "regular, colloquial" Lisp, or
>   2: Learn your variations on Lisp, namely the language that is
>      "Lisp Plus Some K001 operators we made up."
That's again the old, long-standing advice against macros.

Newer books have advocated the use of domain-specific languages (DSLs)
and related concepts that fit nicely into macros, like all kinds of
declarative constructs (your point a).  I think this puts the power of
macros in the right place.

> I'd tend to think it preferable to go to "tried and true" traditional
> Lisp, as that is used by a much wider community.
It depends.  For example, I refrained from implementing a COUNTDOWN
macro as the reverse of DOTIMES.
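
For the record, the COUNTDOWN macro he refrained from writing would
only be a few lines; here is one hypothetical rendering as the mirror
image of DOTIMES.

```lisp
;; Hypothetical COUNTDOWN: like DOTIMES, but VAR runs from COUNT-1
;; down to 0.  Exactly the kind of small made-up operator whose
;; "cost of comprehension" the thread is debating.
(defmacro countdown ((var count &optional result) &body body)
  `(do ((,var (1- ,count) (1- ,var)))
       ((minusp ,var) ,result)
     ,@body))
```

For example, (countdown (i 3) (print i)) prints 2, 1, 0.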

Do you want simple, readable code (for highly reliable systems)?  Do
you want high-performance code?  Do you want code fast to write?  It
all depends.

Do you place more trust in concepts expressed w.r.t. a given domain,
compiled into Lisp code (compiled into whatever) using techniques of
partial evaluation, compilation et al., which you must then trust and
verify as well, or in code written in basic Lisp?  Your job's
requirements will probably bias your answer.

Some safety requirements argue against the use of preprocessors.
Macros may be perceived as very similar in effect.


On the other hand, I sometimes miss OO features within CL primitives.
That's IMHO what prevents a real choice between "abstract" operations
and CL primitives.  My favourite example here is some bag or set
type, with different operations allowed whenever I violate abstraction
and take advantage of the underlying BIT-VECTOR, LIST or HASH-TABLE
structure.  But maybe I'm just asking for good type analysis (not
necessarily a static type system): I don't want to rewrite the code
using the sets when I change the representation, yet it should be as
fast as the primitive operations that it maps onto (suffer no extra
FUNCALL that just does CAR, etc).
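
One hypothetical shape for such a bag/set type (SET-ADJOIN and
SET-MEMBER-P are invented names) dispatches on the underlying
representation:

```lisp
;; Hypothetical sketch of the set abstraction described above, with
;; one entry point per operation dispatching on the representation.
;; For BIT-VECTOR sets, the elements are small non-negative integers.
(defun set-adjoin (item set)
  "Add ITEM to SET; return the (possibly new) set object."
  (etypecase set
    (list       (adjoin item set))
    (hash-table (setf (gethash item set) t) set)
    (bit-vector (setf (bit set item) 1) set)))

(defun set-member-p (item set)
  "Return true if ITEM is in SET."
  (etypecase set
    (list       (member item set))
    (hash-table (gethash item set))
    (bit-vector (= 1 (bit set item)))))
```

The ETYPECASE dispatch on every call is precisely the per-operation
overhead the post hopes a good type analysis could compile away.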

Regards,
	Jorg Hohle
Telekom Research Center -- SW-Reliability
From: Chuck Fry
Subject: Re: abstraction, OO and macros (Was: CMU CL vs. CLISP?)
Date: 
Message-ID: <7nkl4l$lma$1@shell5.ba.best.com>
In article <··················@tzd.dont.telekom.spam.de.me>,
Joerg-Cyril Hoehle <············@tzd.dont.telekom.spam.de.me> wrote:
>········@news.brownes.org (Christopher B. Browne) writes:
>> It's reasonable to expect that if "native operators" work more efficiently
>> than "generated ones," this will encourage developers to prefer using
>> "native" ones.
>> 
>> That being said, there are two confounding effects:
>> a) Constructing your own operators using macros provides a direct
>> translation of "generated operators" into "native" ones, which mitigates
>> the problem.
>
>That's old style.  All newer Lisp books advise against macros for
>that purpose and heavily recommend DECLARE INLINE.  Yet long-
>experienced Lispers still conclude "macros are the only way to optimize
>portably".

That's because the INLINE declaration in user code is ignored by at
least one very popular commercial CL implementation.  So one winds up
having to use macrology instead.

>Do you want simple, readable code (for highly reliable systems)?  Do
>you want high-performance code?  Do you want code fast to write?  It
>all depends.

I want it *all*!  I don't see that these goals have to conflict.
Granted, tuning code for performance takes time, but it's
straightforward to write a prototype using appropriate abstractions,
then later tune the abstractions according to observed performance.

 -- Chuck
-- 
	    Chuck Fry -- Jack of all trades, master of none
 ······@chucko.com (text only please)  ········@home.com (MIME enabled)
Lisp bigot, mountain biker, car nut, sometime guitarist and photographer
The addresses above are real.  All spammers will be reported to their ISPs.
From: Pierpaolo Bernardi
Subject: Re: abstraction, OO and macros (Was: CMU CL vs. CLISP?)
Date: 
Message-ID: <933331812.289478@fire-int>
Joerg-Cyril Hoehle (············@tzd.dont.telekom.spam.de.me) wrote:
: Hello,

: I'm trying to start a discussion about abstraction, optimization,
: macros and also OO.

: I've observed a shift about the former bad opinion about macros in the
: last years.  Maybe Peter Norvig started with PAIP, and Paul Graham's
: On Lisp made the point extremely clear.


: ········@news.brownes.org (Christopher B. Browne) writes:
: > It's reasonable to expect that if "native operators" work more efficiently
: > than "generated ones," this will encourage developers to prefer using
: > "native" ones.
: > 
: > That being said, there are two confounding effects:
: > a) Constructing your own operators using macros provides a direct
: > translation of "generated operators" into "native" ones, which mitigates
: > the problem.

: That's old style.  All newer Lisp books advocate against macros for
: that purpose and heavily recommend DECLARE INLINE.  

But Allegro does not grok DECLARE INLINE!  So, this cannot be a useful
technique.

I wonder in what abominable lisp assembler style Allegro users must
write their programs, given this limitation of their compiler.

(Sorry, I have nothing against the Allegro compiler. Just against some
of its most obnoxious users).

P.
From: Erik Naggum
Subject: Re: abstraction, OO and macros (Was: CMU CL vs. CLISP?)
Date: 
Message-ID: <3142323454039562@naggum.no>
* Pierpaolo Bernardi
| I wonder in what abominable lisp assembler style Allegro users must
| write their programs, given this limitation of their compiler.

  call it whatever you like, but it is indistinguishable from well-written
  Common Lisp.  you see, some Allegro CL users have discovered compiler
  macros and truly wonder what the hell this inline business is all about.

| (Sorry, I have nothing against the Allegro compiler.  Just against some
| of its most obnoxious users).

  yeah, it's really irritating to have people show you a better way to do
  something which blows all your arguments to bits, but you, too, can use
  compiler macros, and the need for inlining just vanishes.  of course, you
  can still complain that Allegro CL ignores (DECLARE INLINE), but I don't
  think it'll have the same intensity once you know the better way to do it.
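
The technique, sketched with an invented example (DISTANCE is not
from the thread): the function keeps its clean definition, and a
compiler macro supplies the open-coded expansion for calls the
compiler sees.

```lisp
;; Sketch of the compiler-macro technique: DISTANCE stays an ordinary,
;; clean function (still usable via FUNCALL/APPLY), while the compiler
;; macro open-codes direct calls.  A (DECLARE (NOTINLINE DISTANCE))
;; suppresses the expansion where needed.
(defun distance (x1 y1 x2 y2)
  (sqrt (+ (expt (- x2 x1) 2)
           (expt (- y2 y1) 2))))

(define-compiler-macro distance (x1 y1 x2 y2)
  ;; Bind the argument forms first to preserve evaluation order.
  `(let ((x1 ,x1) (y1 ,y1) (x2 ,x2) (y2 ,y2))
     (let ((dx (- x2 x1)) (dy (- y2 y1)))
       (sqrt (+ (* dx dx) (* dy dy))))))
```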

  when Franz Inc has finished their ongoing work on environments, which was
  lost from CLtL2 because it hadn't been done right at the time, you can
  write your code in two stages: (1) the clean, neat expression of intent,
  and (2) the dirty details in compiler macros.  that's how I prefer to do
  things already, instead of littering my code with declarations, so this
  will just get better and better.  inlining probably won't be able to use
  environment information at all.

  (and with that, I'm gone for a good ten days, again.  see you at IJCAI.)

#:Erik, probably obnoxious
-- 
  suppose we blasted all politicians into space.
  would the SETI project find even one of them?
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <933038252.554404@fire-int>
Christopher B. Browne (········@news.brownes.org) wrote:
: On 26 Jul 1999 01:55:23 GMT, Pierpaolo Bernardi <········@cli.di.unipi.it> posted:

: It's reasonable to expect that if "native operators" work more efficiently
: than "generated ones," this will encourage developers to prefer using
: "native" ones.

Maybe it is reasonable for a perl hacker.  

A professional programmer should be concerned with the correctness and
maintainability of his code.

If, to be correct, his code needs to run faster than Clisp can
provide, he should not be using Clisp.

: That being said, there are two confounding effects:

The rest of your post is right, but not relevant.

P.
From: Erik Naggum
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <3141969146012605@naggum.no>
* ········@cli.di.unipi.it (Pierpaolo Bernardi)
| I know this.  I thought that the original comment said that the compiler
| didn't obey inline declarations.  Maybe I have misread.

  that may be what he meant, but he said "doesn't grok inline functions".
  since it is easy to misunderstand this (watch what people have taken
  pretty clear statements to mean in here recently) to mean that Allegro CL
  doesn't inline system functions, either, I thought it was worth pointing
  out.  as a side note, NOTINLINE declarations are of course honored.

| If a programmer writes bad code, that is the programmer's problem.  He
| should not blame the lisp implementation he's using.

  sigh.  the exact same argument can be used about programming languages.
  it seems you go out of your way to refuse to understand the issue in
  favor of defending CLISP, so I give up, but will just make a mental note
  that CLISP _still_ needs defending by people who refuse to listen to
  criticism, like it has in the past.

| You are wrong. I don't mean this, and I can't see how you can conclude
| that I mean this from what I have written.

  it's a pretty obvious conclusion from your silly refusal to understand
  the criticism and crack jokes about a serious concern.

#:Erik
-- 
  suppose we blasted all politicians into space.
  would the SETI project find even one of them?
From: Pierpaolo Bernardi
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <933040296.950555@fire-int>
Erik Naggum (····@naggum.no) wrote:
: * ········@cli.di.unipi.it (Pierpaolo Bernardi)

:   it seems you go out of your way to refuse to understand the issue in
:   favor of defending CLISP, so I give up, but will just make a mental note
:   that CLISP _still_ needs defending by people who refuse to listen to
:   criticism, like it has in the past.

I just reacted to a puzzling assertion and corrected a bit of false
information.

: | You are wrong. I don't mean this, and I can't see how you can conclude
: | that I mean this from what I have written.

:   it's a pretty obvious conclusion from your silly refusal to understand
:   the criticism and crack jokes about a serious concern.

I promise I will never joke again about Clisp's builtins being too fast.

P.
From: William Deakin
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <3796DF5B.92B330A4@pindar.com>
As an aside, I am running clisp but would like to get hold of a copy of
CMU CL and have heard the debian CMU CL highly recommended. I am an out
of hours slackware-y and have not been able to track down the debian
package. This has been exacerbated by the search engine at debian.org
not working :-( Could anybody help me in my quest?

:-) Will
From: Friedrich Dominicus
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <3796E989.4E7D2BCE@inka.de>
William Deakin wrote:
> 
> As a aside, I am running clisp but would like to get a hold of a copy of
> CMU CL and have heard the debian CMU CL highly recommended. I am an out
> of hours slackware-y and have not been able to track down the debian
> package. This has been exacerbated by the search engine at debian.org
> not working :-( Could anybody help me with in my quest?


You'd better ask this question on some Debian mailing list. I guess you
have to provide some infrastructure to get the *.deb files up and
running; at least you have to have the dpkg kit.

Regards
Friedrich
From: Pierre R. Mai
Subject: Re: CMU CL vs. CLISP?
Date: 
Message-ID: <87zp0o53b4.fsf@orion.dent.isdn.cs.tu-berlin.de>
Friedrich Dominicus <···················@inka.de> writes:

> William Deakin wrote:
> > 
> > As a aside, I am running clisp but would like to get a hold of a copy of
> > CMU CL and have heard the debian CMU CL highly recommended. I am an out
> > of hours slackware-y and have not been able to track down the debian
> > package. This has been exacerbated by the search engine at debian.org
> > not working :-( Could anybody help me with in my quest?
> 
> 
> You better ask this question in some debian mailing-list. I guess you
> have to provide some infra-structurre to get the *deb files up and
> running. at least you have to have the dpkg kit. 

If I were him, I'd just download the deb package and run alien on it
to get a slackware package.  To find the current CMU CL package (which 
probably needs GLIBC 2.1, though), go to the packages section from the 
Debian home page (link is somewhere on the left), then go to the
unstable packages section (second link from top).  There you'll find
CMU CL packages in the Devel section.  On the info page, press the
download page button, and select the location nearest to you...  Run
alien over the DEB package and enjoy.

If you only have GLIBC 2.0, you might want to get the old stable
version of CMU CL (2.4.9) instead, which is in the Devel section of
the stable packages.

There are also a couple of other cmucl-related packages you might want 
to get (most start with cmucl-).

If you don't have access to alien, you might get by with unpacking the
deb archive yourself (debs are ar archives which contain two tarballs:
one with the control information, and one with the files, to be
unpacked into the root directory).
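
That manual route can be sketched with a scratch archive (every file
name here is made up for the demonstration; a real cmucl .deb has the
same three members, which you can list first with "ar t"):

```shell
# a .deb is an ar(1) archive; the data.tar.gz member holds the files.
# build a tiny example package, then take it apart again:
mkdir -p pkg staging
echo "hello" > pkg/README
tar czf data.tar.gz -C pkg .                 # the files member
tar czf control.tar.gz -C pkg .              # stand-in for the metadata member
printf '2.0\n' > debian-binary
ar rc example.deb debian-binary control.tar.gz data.tar.gz
rm data.tar.gz control.tar.gz debian-binary  # keep only the .deb

ar t example.deb                # list the members
ar x example.deb data.tar.gz    # pull out just the files tarball
tar xzf data.tar.gz -C staging  # for a real package: -C / as root
```

For a real package you would skip the build half, extract data.tar.gz
from the downloaded .deb, and unpack it into / instead of a staging
directory.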

On Debian, getting CMU CL is just as simple as typing

apt-get install cmucl

on your command line ;)

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]