From: Jasjit Grewal
Subject: Free implementation for Windows 95
Date: 
Message-ID: <66me8i$oop@dailyplanet.wam.umd.edu>
Hello,

I'm a CS student who just had his first taste of LISP this semester and
loved it.  I'm looking for a GOOD FREE implementation of COMMON LISP (as
far as there is such a thing) for Windows 95.  I know there's Allegro CL
lite, but apparently that stops working after a while.  Anybody have any
other suggestions?  I'm probably not doing anything too heavy--mostly AI
programming: Game programming, Logic programming (theorem provers
etc.)--so I don't need anything too big.

Any suggestions will be appreciated.

Thanks,
J. Grewal

From: Bruno Haible
Subject: Re: Free implementation for Windows 95
Date: 
Message-ID: <1722p14184@clisp.cons.org>
Sam Shteingold <···········@cctrading.com> wrote about CLISP:
> It has disadvantages though:
> 1. The author refuses to make it ANSI CL compliant (CLtL1 only).

The "CLtL1 only" is not true. We are moving towards ANSI CL for those
issues

  - for which there is user request (e.g. CHANGE-CLASS and
    DEFINE-METHOD-COMBINATION are not being asked for),

  - which are reasonable (e.g. FORMAT ~/xxx/ is unreasonable because
    it interferes badly with the package system, and the floating
    point contagion rule in ANSI CL goes the wrong way around),

  - which are helpful (e.g. I don't consider it helpful or reasonable
    to specify the pretty-printing algorithm of a CL implementation,
    especially because the XP implementation has the tendency to
    "move horizontally" up to column 50 and then "move vertically"
    as fast as possible, thus producing many more output lines than
    an algorithm which aims for terse output).

> 2. No native compilation (byte code only)

You can also see it as an advantage: Byte code is small. Most native code
Lisps which I run on my box (GCL, CMUCL, ILOG Talk) cause heavy swapping,
especially when there's also an Emacs running. CLISP and Franz' ACL are
much quieter: the ACL developers have paid attention to small code
size.

> This is the only free CL implementation for the MS world.

If you see this as an encouragement to use MS software, I sincerely
apologize for it.

                   Bruno
From: Gareth McCaughan
Subject: Re: Free implementation for Windows 95
Date: 
Message-ID: <86btyn98iv.fsf@g.pet.cam.ac.uk>
Sam Shteingold wrote:

[CLISP's backwards floating-point contagion:]
> I consider this to be a major bug in CLISP, which, actually, IMHO,
> disqualifies CLISP from ever being called a CL. "CLISP is a CL with many
> advanced features missing and broken floating contagion".

I'm puzzled. A floating-point number is basically a number about
which all that's known is that it lies within some interval or
other, and using a more precise FP type amounts to taking more
space and time to make the interval smaller. The ANSI CL rules
say that when you e.g. add together a bunch of FP values of
differing levels of uncertainty, you should take the *smallest*
of those intervals for the result.
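
For instance (in an ANSI-conforming implementation; both inputs are
exactly representable, so only the *type* of the result differs
between the two rules):

(+ 1.5f0 0.25d0)   ; ANSI CL: 1.75d0    CLISP's rule: 1.75f0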

This is certainly standard, but it seems to me that it's also
certainly wrong. I can't think of any case in which it's the
Right Thing.

Certainly CLISP is broken in the sense of violating the standard;
but you evidently mean something more than that. Can you give an
example in which CLISP's behaviour is actually harmful?

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Michael Greenwald
Subject: Re: Free implementation for Windows 95
Date: 
Message-ID: <michaelg.881875430@Xenon.Stanford.EDU>
Gareth McCaughan <·····@dpmms.cam.ac.uk> writes:

>Certainly CLISP is broken in the sense of violating the standard;
>but you evidently mean something more than that. Can you give an
>example in which CLISP's behaviour is actually harmful?

I missed the beginning of this conversation, and I'm guessing that the
CLISP contagion rule is "when 2 floats are combined, convert to the
least precise format".

Assuming that the comparison contagion rule is "convert to the most
precise (including from floats to rational) format" (else transitivity
doesn't hold), then the following anomaly arises from the arithmetic
combination rule:

(let* ((f1 1000d0)
       (epsilon (coerce single-float-epsilon 'double-float))
       (2epsilon (* 2 single-float-epsilon)))
  (print (< f1 (+ f1 epsilon)))  ;; should be TRUE
  (print (< epsilon 2epsilon))   ;; should be TRUE
  ;; add the two smaller and compare to the sum of the two larger
  (print (< (+ f1 epsilon)
	    (+ (+ f1 epsilon) 2epsilon)));; not smaller anymore!
  ;; but it's even worse!  If my predictions are true, then it's 
  ;; actually >
  (print (> (+ f1 epsilon)
	    (+ (+ f1 epsilon) 2epsilon))))

I don't know if everyone would call this a bug, but I would say it
certainly qualifies as "surprising" and it arises out of choosing the
"convert to the least precise" contagion rule.

(The real answer is that one should be careful about types when
dealing with numbers, and that automatic coercion is something to be
especially careful about.)
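
A minimal sketch of that care: coerce every input once, up front, to
the precision you've chosen, so that no contagion rule ever fires:

(let* ((f1 1000d0)
       (epsilon (float single-float-epsilon 1d0))
       (2epsilon (float (* 2 single-float-epsilon) 1d0)))
  ;; all three are double-floats now; this prints T everywhere
  (print (< (+ f1 epsilon) (+ (+ f1 epsilon) 2epsilon))))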
From: Bruno Haible
Subject: Re: Free implementation for Windows 95
Date: 
Message-ID: <1753p20361@clisp.cons.org>
Michael Greenwald <········@Xenon.Stanford.EDU> wrote:
>
> Assuming that the comparison contagion rule is "convert to the most
> precise (including from floats to rational) format" (else transitivity
> doesn't hold), then the following anomaly arises from the arithmetic
> combination rule:
>
> (let* ((f1 1000d0)
>        (epsilon (coerce single-float-epsilon 'double-float))
>        (2epsilon (* 2 single-float-epsilon)))
>   (print (< f1 (+ f1 epsilon)))  ;; should be TRUE
>   (print (< epsilon 2epsilon))   ;; should be TRUE
>   ;; add the two smaller and compare to the sum of the two larger
>   (print (< (+ f1 epsilon)
>      	    (+ (+ f1 epsilon) 2epsilon)));; not smaller anymore!
>   ;; but it's even worse!  If my predictions are true, then it's 
>   ;; actually >
>   (print (> (+ f1 epsilon)
> 	    (+ (+ f1 epsilon) 2epsilon))))

Indeed, your sample prints  T T T NIL  in CMUCL but  T T NIL T  in CLISP.
This is surprising, but I tend to think that the problem in this code
is the comparisons. In numerical algorithms with floating-point numbers
I normally don't write  (= x y)  but  (< (abs (- x y)) epsilon), and
similarly for the other comparisons; with that style the problem your
code demonstrates goes away.
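
A minimal sketch of such helpers (the names and the default tolerance
are just for illustration; epsilon must be chosen per application, it
is not a language constant):

(defun approx= (x y &optional (eps 1d-9))
  (< (abs (- x y)) eps))

(defun approx< (x y &optional (eps 1d-9))
  ;; x counts as smaller only when smaller by more than eps
  (< x (- y eps)))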

                           Bruno
From: Michael Greenwald
Subject: Re: Free implementation for Windows 95
Date: 
Message-ID: <michaelg.881969149@CS.Stanford.EDU>
······@ilog.fr (Bruno Haible) writes:

>Michael Greenwald <········@Xenon.Stanford.EDU> wrote:
>>
>> Assuming that the comparison contagion rule is "convert to the most
>> precise (including from floats to rational) format" (else transitivity
>> doesn't hold), then the following anomaly arises from the arithmetic
>> combination rule:
>>
>> (let* ((f1 1000d0)
>>        (epsilon (coerce single-float-epsilon 'double-float))
>>        (2epsilon (* 2 single-float-epsilon)))
>>   (print (< f1 (+ f1 epsilon)))  ;; should be TRUE
>>   (print (< epsilon 2epsilon))   ;; should be TRUE
>>   ;; add the two smaller and compare to the sum of the two larger
>>   (print (< (+ f1 epsilon)
>>      	    (+ (+ f1 epsilon) 2epsilon)));; not smaller anymore!
>>   ;; but it's even worse!  If my predictions are true, then it's 
>>   ;; actually >
>>   (print (> (+ f1 epsilon)
>> 	    (+ (+ f1 epsilon) 2epsilon))))

>Indeed, your sample prints  T T T NIL  in CMUCL but  T T NIL T  in CLISP.
>This is surprising, but I tend to think that the problem in this code
>is the comparisons. In numerical algorithms with floating-point numbers
>I normally don't write  (= x y)  but  (< (abs (- x y)) epsilon), and
>similarly for the other comparisons; with that style the problem your
>code demonstrates goes away.

a) I agree with your high-level point: if people are careful and aware
of the contagion rules, then *no* contagion rule is worse than
another.  However, this is still a "surprise" for the unwary when one
uses "coerce to least precise" contagion for arithmetic and "coerce
to most precise" for comparisons.

b) The question posed was simply whether there was any sense in which
"coerce to more precise" was *better* than "coerce to less precise",
rather than just "it's the standard". I provided an example (given my
assumption, apparently correct, about comparison contagion).

c) It's not quite as easy as you might think to get rid of this
surprise.  Introducing =, <, > etc. that consider two floats to be
equal if they are within one part in the precision can destroy
transitivity.  E.g.

(defun float-= (x y epsilon)
  (< (abs (/ (max x y) (min x y))) (+ 1 epsilon)))

(Your example above isn't sufficient, since epsilon must be a
function of x and y.)
Again, one must be careful and keep epsilon constant across *all*
terms in the computation.  This effectively means you have to choose
your precision once and stick with it.
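
Concretely, with that float-= (the values are picked only to make the
point):

(let ((eps 0.001) (x 1.0) (y 1.0007) (z 1.0014))
  (list (float-= x y eps)     ; => T
        (float-= y z eps)     ; => T
        (float-= x z eps)))   ; => NIL: transitivity is gone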

Maybe the simplest thing would be to introduce coerce-to-least-precise
in comparisons, too?  You might then get a+1 <= x <= a-1, but at least
those paradoxes involve equality, and not converting < to >!

I'm agnostic on the issue, merely pointing out one effect of your
choice of contagion rules.
From: Gareth McCaughan
Subject: Re: floating point contagion; was: Re: Free implementation for Windows 95
Date: 
Message-ID: <86sorzsb31.fsf@g.pet.cam.ac.uk>
Sam Shteingold wrote:

[I'd asserted that longer float types are bigger and slower]
> *SOMETIMES* this is *FALSE*. On the Intel architecture, the FPU
> internally works with doubles no matter what. So if you have singles, you
> save RAM but lose speed, as your singles have to be converted to
> doubles before each computation and then the result is converted to a
> single again.

Yes, true. I'm guilty of oversimplification rather than stupidity
here...

> The floats are approximating some actual real numbers. You are saying
> that 
> 	(0.1+-0.05) + (0.123+-0.0005) = 0.223 +- 0.0505
> should be represented as 0.2+-0.05
> You are right, if your goal is to make sure that the errors are kept
> correctly and the user is not misled into thinking that the answer is
> 0.223+-0.0005. But 0.223 is closer to the actual answer!

*click* Quite true, and I'd forgotten that. (Which is embarrassing
because quite recently I've been thinking about the same issue in
other contexts.)

I won't claim to be convinced that ANSI CL contagion is the Right
Thing (since there are arguments on both sides, and actually I still
think CLISP-style contagion is better), but I entirely withdraw my
assertion that it's clearly the Wrong Thing.

> CLISP-style contagion claims to be able to infer accuracy from the
> precision, which is not always a good idea, and it gives the user no
> option to override it. 0.5 is not always +-0.05. It might be an exact
> number! 

Well, I'd suggest that if you want the *exact* number 0.5 then you
should use the perfectly good CL number 1/2.

> Whenever a computation involves a loss of precision.  Suppose X is a
> number which can be represented exactly as a double *and* a single, but
> whose square requires a double to represented precisely.  If you put X
> into a single, (= (sqrt (* x x)) x) => nil, while if X is a double, it
> will be T.  Not a big deal?  Sometimes getting -0.00005 instead of 0.0
> makes a difference, especially if the next step is sqrt. 

Indeed. On the other hand: if the numbers in question were supplied
directly in the source, then it's easy to make them doubles, and if
they were computed from other values (in single precision) then you
no longer have any good reason to expect any accuracy from them. This
whole issue only arises when you enter values as singles instead
of as doubles because you know that they can be represented accurately
as singles, which to my mind is a disgusting hack. (This is why I
still think CLISP-style contagion is better than ANSI-style, leaving
aside all questions of standardness. If you want to do calculations
using more precision than the numbers you feed them, then I'd prefer
it to be necessary to say so.)

> I was hard hit by this when my constants like 0.1 were interpreted as
> singles and ruined a computation. I had 
>   (setq *read-default-float-format* 'double-float
> 	*default-float-format* 'double-float)
> and it turned out that it should have been enclosed in
>   (eval-when (compile load eval))
> This would not have been necessary if CLISP followed the standard.

Indeed. (But I never argued that being standard isn't a good thing;
I was just querying your use of the word `broken' in preference to
`non-standard'. Of course CLISP's contagion is non-standard; it's
not at all clear that it's broken in any other sense.)

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Pierpaolo Bernardi
Subject: Re: floating point contagion; was: Re: Free implementation for Windows 95
Date: 
Message-ID: <67appu$go1$1@croci.unipi.it>
Gareth McCaughan (·····@dpmms.cam.ac.uk) wrote:
: Sam Shteingold wrote:

: [I'd asserted that longer float types are bigger and slower]
: > *SOMETIMES* this is *FALSE*. On the Intel architecture, the FPU
: > internally works with doubles no matter what. So if you have singles, you
: > save RAM but lose speed, as your singles have to be converted to
: > doubles before each computation and then the result is converted to a
: > single again.

Not so in Clisp.  Clisp's long-floats have arbitrary precision.  They
may have thousands of digits.  The choice of coercing to the smaller
format is dictated by this fact.

Pierpaolo.
From: Raymond Toy
Subject: Re: floating point contagion; was: Re: Free implementation for Windows 95
Date: 
Message-ID: <4nen3jtp6r.fsf@rtp.ericsson.se>
>>>>> "Sam" == Sam Shteingold <···@remove-this-junk.usa.net> writes:

    Sam> *SOMETIMES* this is *FALSE*. On the Intel architecture, the FPU
    Sam> internally works with doubles no matter what. So if you have singles, you

I don't think this is true.  You can set the precision bits in the
control word to do single, double, or extended arithmetic.

    Sam> CLISP-style contagion claims to be able to infer accuracy from the
    Sam> precision, which is not always a good idea, and it gives the user no
    Sam> option to override it. 0.5 is not always +-0.05. It might be an exact
    Sam> number! 

If you are set on doing CL style contagion, I might still have a patch 
or notes on how to modify CLISP to that type of contagion.  You can
probably find it in the clisp mail archives too; look for something
from about two years ago.

    Sam> Whenever a computation involves a loss of precision.  Suppose X is a
    Sam> number which can be represented exactly as a double *and* a single, but
    Sam> whose square requires a double to be represented precisely.  If you put X
    Sam> into a single, (= (sqrt (* x x)) x) => nil, while if X is a double, it
    Sam> will be T.  Not a big deal?  Sometimes getting -0.00005 instead of 0.0
    Sam> makes a difference, especially if the next step is sqrt. 

If it makes a difference, you'd better rethink your algorithms to be
less sensitive to round-off.  You will have to deal with it no matter
what precision you have.
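
In the example above, for instance, (sqrt (* x x)) is just a
roundabout (abs x); written that way it is exact in every precision.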

    Sam> I was hard hit by this when my constants like 0.1 were interpreted as
    Sam> singles and ruined a computation. I had 
    Sam>   (setq *read-default-float-format* 'double-float
    Sam> 	*default-float-format* 'double-float)
    Sam> and it turned out that it should have been enclosed in
    Sam>   (eval-when (compile load eval))
    Sam> This would not have been necessary if CLISP followed the standard.

I think your computation was ruined in any case, since 0.1 has no exact
binary representation: (* 0.1f0 x) is not the same as (* 0.1d0 x).
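
You can see the error directly (the digits below are what IEEE single
gives; the exact printed output may vary by implementation):

(float 0.1f0 1d0)   ; => 0.10000000149011612d0, not 0.1d0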

For finding these kinds of errors CLISP is reasonable.  Even better
would be something like CMUCL with high optimization because it will
warn you when you've done mixed-precision arithmetic.  I know,
this isn't an option for you.

Ray
From: Russell Senior
Subject: Re: Free implementation for Windows 95
Date: 
Message-ID: <867m9a9npx.fsf@coulee.tdb.com>
>>>>> "Gareth" == Gareth McCaughan <·····@dpmms.cam.ac.uk> writes:

>> I consider this to be a major bug in CLISP, which, actually, IMHO,
>> disqualifies CLISP from ever being called a CL. "CLISP is a CL with
>> many advanced features missing and broken floating contagion".

Gareth> I'm puzzled. A floating-point number is basically a number
Gareth> about which all that's known is that it lies within some
Gareth> interval or other, and using a more precise FP type amounts to
Gareth> taking more space and time to make the interval smaller. The
Gareth> ANSI CL rules say that when you e.g. add together a bunch of
Gareth> FP values of differing levels of uncertainty, you should take
Gareth> the *smallest* of those intervals for the result.

You are presuming that the precision of representation is a good
surrogate for the uncertainty of the value.  When analysing
measurement data, this is usually an erroneous assumption.  In my
view, the proper role for the computer system is to maintain as much
precision as is practical and let the user decide how much is
relevant.  Otherwise, why stop at single precision?  

Gareth> This is certainly standard, but it seems to me that it's also
Gareth> certainly wrong. I can't think of any case in which it's the
Gareth> Right Thing.

Gareth> Certainly CLISP is broken in the sense of violating the
Gareth> standard; but you evidently mean something more than that. Can
Gareth> you give an example in which CLISP's behaviour is actually
Gareth> harmful?

I can think of two.  

First is reversibility.  Invert a process that didn't maintain enough
precision and you aren't going to be able to get back the thing you
started with.  Throwing away extra precision is easy.  Getting back
precision that has been thrown away is not.  For example, what should
the following expression return?

   (* 5.001 (/ 1.00000000001d0 5.001))
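
(Roughly: under ANSI contagion the division is carried out in double
precision, and multiplying back by the same 5.001 recovers
approximately 1.00000000001d0; under coerce-to-shorter the quotient
is already a single-float near 0.19996, so the 1d-11 is gone for good
and the product is a single-float near 1.0.)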

The second is that the standard error of the mean gets smaller (i.e.,
the precision of the estimate improves) as the number of measurements
in the sample increases, because of the tendency of the random errors
(the differences between the `true' value and the bounds defined by
the representation) to balance themselves out.  If you are computing
the average of a million single-precision values, your uncertainty
about the correct result can be smaller than single precision alone
would suggest, is that not so?
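
A sketch of how one might exploit that (the explicit FLOAT call is
what keeps the accumulator double even under coerce-to-shorter
contagion):

(defun mean (samples)
  ;; samples: a sequence of single-floats
  (let ((sum 0d0) (n 0))
    (map nil (lambda (x)
               (incf sum (float x 1d0))   ; widen before adding
               (incf n))
         samples)
    (/ sum n)))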

There are good discussions of these issues in the following
publications:

  _Numerical Methods for Scientists and Engineers, 2ed_, RW Hamming,
      1973, Chapter 2. 

  _An Introduction to Error Analysis_, JR Taylor, 1982, University
      Science Books.

  _Data Reduction and Error Analysis for the Physical Sciences, 2ed_,
      PR Bevington & DK Robinson, 1992, McGraw Hill.

-- 
Russell Senior
·······@teleport.com
From: Kent M Pitman
Subject: Re: Free implementation for Windows 95
Date: 
Message-ID: <sfwiusvmvrh.fsf@world.std.com>
Sam Shteingold <···········@cctrading.com> writes
(in what I think is a response to Bruno Haible):

>  >>     and the floating
>  >>     point contagion rule in ANSI CL goes the wrong way around),

Personally, I also think what ANSI CL does is mathematically incorrect
according to what I'd expect.  But it's not the first language to do it.
It also bugs me that CL defines COMPLEX as disjoint from rather than a
superset of REAL.  But so it goes.  I have always stayed clear of Float
requirements discussions because I don't actually USE floats for anything.
I figure those who actually care are the ones who should define what's good
since it's all approximation work anyway ... me, I stick to rationals (and
domains that mostly need nothing else).

> It goes the same way all the other languages I ever used do, the IEEE
> standard way, which preserves precision and lets the *user* control
> accuracy, as opposed to your way, which screws the precision and imposes
> its own (*sometimes* wrong!) idea of accuracy upon the programmer.
> 
> I consider this to be a major bug in CLISP, which, actually, IMHO,
> disqualifies CLISP from ever being called a CL. "CLISP is a CL with many
> advanced features missing and broken floating contagion". [I never use
> this derogatory expression out of reluctance to insult the author (whom
> I deeply respect), but the fact remains. I have been severely bitten by
> this bug some time ago, and Bruno kindly promised to issue a warning
> whenever his contagion is used. I would prefer a compile time option
> which would make CLISP use the IEEE contagion though.] One might be
> tempted to soften the expression to read "nonstandard" or "unusual"
> contagion, but this makes it sound as if contagion were something
> optional, which it is *not*.

Yeah, part of the problem turns out to be that programmers manage
precision separately, and so all they want out of their calculations
is that they blindly do as told and not infer additional meaning.

Sometimes, for example, people want to use 0.5s0 not because it's less
precise than 0.5d0 but because it's more compact.  And so multiplying
it by a double-precision number might really lose no precision.  I'd
rather the language said to use 0.5d0, but it doesn't.

> This is probably of no importance for you, but, IMHO, no programming
> language with your version of floating contagion has any chance to
> succeed in the industry - not because the people in the industry are
> morons, but because the loss of precision is simply unacceptable.

Well, in fairness, I think the precision can be obtained in either set
of data flow by appropriately placed coercions or by appropriate
assumptions about different defaults for the coercions.  The important
thing isn't that industry did it right and that new languages should
adopt that, but that industry made some pragmatic choices and now that
those choices are widely accepted as the normal way of thinking about
problems, it's disruptive to seek to mess with long-established intuitions.
It leads to the same place perhaps, but it avoids any need to think of
anyone on either side of the fence as a moron.
From: Pierpaolo Bernardi
Subject: Re: Free implementation for Windows 95
Date: 
Message-ID: <66rlgt$d14$1@croci.unipi.it>
Sam Shteingold (···········@cctrading.com) wrote:
: CLISP (ftp://ma2s2.mathematik.uni-karlsruhe.de/pub/lisp/clisp/)
: maintained by Bruno Haible (······@ma2s2.mathematik.uni-karlsruhe.de)
: is a free (GPLed) implementation of Common Lisp. It runs on any platform
: you can imagine (I run it on winnt), and is quite small (2MB of RAM is
: enough). It has disadvantages though:
: 1. The author refuses to make it ANSI CL compliant (CLtL1 only).

Usually Bruno is happy to include patches that make Clisp more
useful. Have you had any patches rejected?

P.