From: Ossi Herrala
Subject: Floating-point arithmetic in CL
Date: 
Message-ID: <cnslkt$k7g$1@news.oulu.fi>
Hello List,

And thanks to all of you: I'm now addicted to studying and learning
Common Lisp. Great language! :)

Anyway, I encountered some strange behaviour:

In LispWorks:

> (- 0.9 0.5)
0.4

But in CLisp and CMUCL:

> (- 0.9 0.5)
0.39999998

(On a side note, this happens in Python also:

>>> 0.9 - 0.5
0.40000000000000002
)
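(For what it's worth, the extra digits in Python are not a different wrong answer, just a different default precision: Python's floats are IEEE doubles, and the standard decimal module can display the exact value the literal 0.9 actually denotes, which is presumably where those digits come from:

```python
from decimal import Decimal

# Decimal(float) prints the exact binary value behind the literal 0.9;
# it is close to, but not equal to, 9/10.
print(Decimal(0.9))
```

The printed value begins 0.9000000000000000222..., so the error is already present before any subtraction happens.)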

I once heard a (good) explanation of why this happens with floating-point
numbers, but I still feel this is really wrong.

Is there anything that CL implementations (CLisp and CMUCL) should do
differently? Or should I do something (some kind of work-around?) to
get correct and accurate results?

For example, this is an ugly work-around:

> (/ (- (* 0.9 100) (* 0.5 100)) 100)
0.4

But it works in CLisp, CMUCL and LispWorks. Not that I accept such
hacks ;)

-- 
Ossi Herrala, OH8HUB
PGP: 0x78CD0337 / D343 F9C4 C739 DFFF F619  6170 8D28 8189 78CD 0337 

Hi! I am a .signature virus.  Copy me into your .signature to join in!

From: ·········@gmail.com
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <1101158669.640992.81050@f14g2000cwb.googlegroups.com>
Try this code:

(defmacro define-exact-binary-operation (name)
  (let ((x (gensym))
        (y (gensym)))
    `(defun ,(intern (format nil "~AE" name)) (,x ,y)
       (float (,name (rationalize ,x)
                     (rationalize ,y))))))

(define-exact-binary-operation +)
(define-exact-binary-operation -)
(define-exact-binary-operation *)
(define-exact-binary-operation /)
;; insert any more you want done with rationals inside

(- 0.9 0.5) ; inexact; darn floats
(-e 0.9 0.5) ; As exact as I can get it while still dealing with floats

If I made a mistake here, I invite anybody better-versed in
floating-point arithmetic to point it out to me. It's a definite
possibility.
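A rough Python translation of the same trick, using Fraction.limit_denominator as a stand-in for RATIONALIZE (note that in doubles the 0.9 - 0.5 example happens to come out clean, so 0.3 - 0.2 makes a better demo):

```python
from fractions import Fraction

def exact_sub(x, y):
    # limit_denominator() finds the simplest nearby fraction, playing
    # the role RATIONALIZE plays in the macro above
    return float(Fraction(x).limit_denominator() -
                 Fraction(y).limit_denominator())

print(0.3 - 0.2)            # visibly inexact in binary doubles
print(exact_sub(0.3, 0.2))  # 0.1
```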

-Peter Scott
From: David Sletten
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <nOtod.57922$hN1.34434@twister.socal.rr.com>
·········@gmail.com wrote:
> Try this code:
> 
> (defmacro define-exact-binary-operation (name)
>   (let ((x (gensym))
>         (y (gensym)))
>     `(defun ,(intern (format nil "~AE" name)) (,x ,y)
>        (float (,name (rationalize ,x)
>                      (rationalize ,y))))))
> 
> (define-exact-binary-operation +)
> (define-exact-binary-operation -)
> (define-exact-binary-operation *)
> (define-exact-binary-operation /)
> ;; insert any more you want done with rationals inside
> 
> (- 0.9 0.5) ; inexact; darn floats
> (-e 0.9 0.5) ; As exact as I can get it while still dealing with floats
> 
> If I made a mistake here, I invite anybody better-versed in
> floating-point arithmetic to point it out to me. It's a definite
> possibility.
> 
> -Peter Scott
> 

This looks reasonable since CLHS says:
"rationalize returns a rational that approximates the float to the 
accuracy of the underlying floating-point representation."

But you're really only glossing over the problem. It is _impossible_ to 
represent _most_ numbers exactly using a finite number of bits (in any 
number base). Consequently, the mapping from reals to floats is not 
injective (1:1). So while Lisp may try its darndest to figure out which 
number you 'really' meant based on the approximation it gets, there will 
not be a unique number that generated that approximation. For instance,
1/9 = 720575940379279360/6485183463413514240. However, this is stored as
the same double-precision float as every number over that denominator
whose numerator differs only in its last three digits, from 275 to 365. Thus:
(rationalize (float 720575940379279275/6485183463413514240 1d0)) => 1/9
Obviously, all of these numbers are 'very near' 1/9 for most purposes. 
And your comment above suggests that you understand that as long as 
you're working with floats you can't expect completely exact results. 
However, those using your example may not.

On a practical note, you go to the trouble of simulating exact 
arithmetic, yet in the end you coerce your answer to a single-precision 
float! What you want instead is either:
(float f 1d0) or (coerce f 'double-float)
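The non-injectivity is easy to demonstrate with Python's Fraction, whose float constructor plays the role of CL's RATIONAL (an exact extraction; the variable names here are just illustrative):

```python
from fractions import Fraction

a = Fraction(1, 10)  # the real number 1/10
b = Fraction(0.1)    # the exact rational value of the double nearest 1/10

assert a != b                       # two distinct rationals...
assert float(a) == float(b) == 0.1  # ...map to the same float
print(b)                            # the "ugly" exact value
```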

David Sletten
From: Christophe Rhodes
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <sq1xelzefo.fsf@cam.ac.uk>
·········@gmail.com writes:

> Try this code:
>
> (defmacro define-exact-binary-operation (name)
>   (let ((x (gensym))
>         (y (gensym)))
>     `(defun ,(intern (format nil "~AE" name)) (,x ,y)
>        (float (,name (rationalize ,x)
>                      (rationalize ,y))))))
> [...]
> (- 0.9 0.5) ; inexact; darn floats
> (-e 0.9 0.5) ; As exact as I can get it while still dealing with floats
>
> If I made a mistake here, I invite anybody better-versed in
> floating-point arithmetic to point it out to me. It's a definite
> possibility.

It's not the arithmetic that's inexact (in this case); it's reading in
a decimal number.

0.9 reads in as the float with value 7549747/8388608;
0.5 reads in as the float with value 1/2;

(- 0.9 0.5) evaluates to the float which prints as 0.39999998, which
has value 3355443/8388608, which is the same value as that returned by
(- 7549747/8388608 1/2).

Ironically, your "exact binary operation" is inexact, through the use
of RATIONALIZE...
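One way to see the two stages separately (here in Python doubles rather than single floats, but the phenomenon is the same): extract the exact rational value of each literal after reading, and check that the float subtraction added no further error:

```python
from fractions import Fraction

# Fraction(float) recovers the exact value the reader produced.  The
# difference of those exact values is itself representable, so the
# subtraction is exact -- all the error came from reading the literal.
assert Fraction(0.9) - Fraction(0.5) == Fraction(0.9 - 0.5)
```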

Christophe
From: Thomas A. Russ
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <ymimzx76pqg.fsf@sevak.isi.edu>
Christophe Rhodes <·····@cam.ac.uk> writes:

> 
> ·········@gmail.com writes:
> 
> > Try this code:
> >
> > (defmacro define-exact-binary-operation (name)
> >   (let ((x (gensym))
> >         (y (gensym)))
> >     `(defun ,(intern (format nil "~AE" name)) (,x ,y)
> >        (float (,name (rationalize ,x)
> >                      (rationalize ,y))))))
> > [...]
> > (- 0.9 0.5) ; inexact; darn floats
> > (-e 0.9 0.5) ; As exact as I can get it while still dealing with floats
> >
> > If I made a mistake here, I invite anybody better-versed in
> > floating-point arithmetic to point it out to me. It's a definite
> > possibility.
> 
> It's not the arithmetic that's inexact (in this case); it's reading in
> a decimal number.
> 
> 0.9 reads in as the float with value 7549747/8388608;

Hmmm.  When I try this I get

 (rationalize 0.9) =>  9/10

using ACL 5.0, MCL 5.0, openmcl 0.13.6 and CMUCL 18c. 

It is only when I use RATIONAL instead of RATIONALIZE that I get the
fraction 7549747/8388608 for a single float.  I get a different number
for a double float, but the same 9/10 for either with rationalize.

The key is the difference between RATIONAL and RATIONALIZE, which is
perhaps a bit subtle.  The former gives an exact fraction from the
floating point representation.  The latter is supposed to give the
"nicest" fraction whose floating point value is indistinguishable from
the given float.
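Python's fractions module happens to offer both behaviours, which makes the contrast easy to see: Fraction(float) acts like RATIONAL, and limit_denominator() like RATIONALIZE:

```python
from fractions import Fraction

x = 0.9
exact  = Fraction(x)                      # like RATIONAL: the stored value
nicest = Fraction(x).limit_denominator()  # like RATIONALIZE: simplest match

assert exact != Fraction(9, 10)            # the float is not really 9/10
assert nicest == Fraction(9, 10)           # but 9/10 is the "nicest" reading
assert float(exact) == float(nicest) == x  # both round back to the same float
```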

> 0.5 reads in as the float with value 1/2;
> 
> (- 0.9 0.5) evaluates to the float which prints as 0.39999998, which
> has value 3355443/8388608, which is the same value as that returned by
> (- 7549747/8388608 1/2).
> 
> Ironically, your "exact binary operation" is inexact, through the use
> of RATIONALIZE...

Either you didn't actually try it, or you are using a Lisp that
implements RATIONAL and RATIONALIZE the same way.

> 
> Christophe

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Christophe Rhodes
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <sq3byyzx3a.fsf@cam.ac.uk>
···@sevak.isi.edu (Thomas A. Russ) writes:

> Christophe Rhodes <·····@cam.ac.uk> writes:
>
>> Ironically, your "exact binary operation" is inexact, through the use
>> of RATIONALIZE...
>
> Either you didn't actually try it, or you are using a Lisp that
> implements RATIONAL and RATIONALIZE the same way.

You misunderstand, or maybe you didn't actually read what I wrote.

There are two stages involved in the computation.  The first is
converting the string "(- 0.9 0.5)" into a lisp program, and the
second is executing it.  In Lisps with IEEE754 single floats as the
default floating point format, the reading phase results in

  +---+---+
  | - | . |
  +---+-+-+
        |  +---+---+
        +->| A | . |
           +---+-+-+
                 |  +---+---+
                 +->| B |,-'|
                    +---+---+

where A is the IEEE single floating point number with value
approximately 0.8999999761581421 (exact value 7549747/8388608) and B
is the IEEE single floating point number with value 0.5 (exactly 1/2).

The second stage is executing this program, which returns the IEEE
single floating point number 0.39999998 (exact value 3355443/8388608),
which happens to be the mathematically exact answer to the subtraction.  So,
in this case (and in many other cases as well) the mathematical
operator cl:- acting on two single floats has returned the
mathematically exact answer given its two operands.  The apparent
inexactitude in the answer comes from the non-uniform non-dense space
of floating point objects: there are more single floats between 0.0
and 0.5 than there are between 0.5 and 1.0.
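That non-uniform spacing is directly measurable; in Python, math.ulp gives the gap from a float to the next one, and the gap doubles on crossing 0.5:

```python
import math

# adjacent doubles just below 0.5 are twice as close together
# as adjacent doubles just above it
assert math.ulp(0.4) == math.ulp(0.6) / 2
```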

The fact that the use of RATIONALIZE causes the "right" answer to come
out of this two stage process is no more than an accident, I'm afraid:
while it gets "(- 0.9 0.5)" 'right', it gets "(- 0.899999976 0.5)"
wrong.  In the specific case in question, the inexactitude that
RATIONALIZE introduces happens to cancel out the inexactitude caused
by converting the program text into Lisp objects, but the fact remains
that the use of RATIONALIZE causes the 'exact binary operation' in the
grandparent to this article to be an inexact operation.

Christophe
From: Pascal Bourguignon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <878y8tpjjs.fsf@thalassa.informatimago.com>
·········@gmail.com writes:

> Try this code:
> 
> (defmacro define-exact-binary-operation (name)
>   (let ((x (gensym))
>         (y (gensym)))
>     `(defun ,(intern (format nil "~AE" name)) (,x ,y)
>        (float (,name (rationalize ,x)
>                      (rationalize ,y))))))

Well, there's  absolutely no need to  use gensyms there  since name is
used as a function and x and y as values.

    (defmacro define-exact-binary-operation (name)
        `(defun ,(intern (format nil "~AE" name)) (x y)
            (float (,name (rationalize x) (rationalize y)))))

would work even if you called:

    (define-exact-binary-operation x)

> (- 0.9 0.5) ; inexact; darn floats
> (-e 0.9 0.5) ; As exact as I can get it while still dealing with floats
> 
> If I made a mistake here, I invite anybody better-versed in
> floating-point arithmetic to point it out to me. It's a definite
> possibility.

You're taking useless steps to compute a result that is not more
precise (even if it displays as you want).

For all practical purposes, 0.39999998 == 0.4 == 0.40000004
You should never compare floating-point values for equality!

     (defun float-equal (a b epsilon) (< (abs (- a b)) epsilon))
     (float-equal (- 0.9 0.5) 0.4 0.0001) ==> T
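Python ships the same idea as math.isclose; since 0.9 - 0.5 happens to come out clean in doubles, a pair that does not makes the point better:

```python
import math

# plain equality on floats fails even for "obvious" identities
assert (0.3 - 0.2) != 0.1
# an epsilon comparison, in the spirit of FLOAT-EQUAL above, succeeds
assert math.isclose(0.3 - 0.2, 0.1, abs_tol=1e-4)
```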


The point is that when you add or substract two floating point values:

    v_1±epsilon_1 - v_2±epsilon_2

you don't get v_1-v_2, you get:

    (v_1-v_2)±(epsilon_1+epsilon_2)

You want to have the computer display v_1-v_2, but the real value
could be anything between:

    v_1-v_2-(epsilon_1+epsilon_2) and v_1-v_2+(epsilon_1+epsilon_2)

so it does not matter what is displayed.


Perhaps hardware manufacturers should forget floating point and only
provide range (interval) arithmetic.  It would not be slower, in hardware.
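A toy sketch of such range arithmetic (purely illustrative; a real interval library would also round the endpoints outward):

```python
class Interval:
    """A value known only to lie somewhere in [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        # worst cases: smallest minus largest and largest minus smallest,
        # so the epsilons add, exactly as in the argument above
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def width(self):
        return self.hi - self.lo

a = Interval(0.9 - 1e-7, 0.9 + 1e-7)  # v_1 +/- epsilon_1
b = Interval(0.5 - 1e-7, 0.5 + 1e-7)  # v_2 +/- epsilon_2
d = a - b
assert d.lo < 0.4 < d.hi              # the true difference is inside
assert abs(d.width() - 4e-7) < 1e-12  # uncertainties added up
```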


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
The world will now reboot; don't bother saving your artefacts.
From: Jens Axel Søgaard
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <41a1e24d$0$262$edfadb0f@dread12.news.tele.dk>
Ossi Herrala wrote:

> I once heard a (good) explanation why this happens with floating-point
> numbers, but I feel this is really wrong.

If you want the full explanation see the classic article "What Every
Computer Scientist Should Know About Floating-Point Arithmetic":

     <http://docs.sun.com/source/806-3568/ncg_goldberg.html>

-- 
Jens Axel Søgaard
From: Tim Bradshaw
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <1101151627.050684.134880@f14g2000cwb.googlegroups.com>
Jens Axel Søgaard wrote:
> If you want the full explanation see the classic article "What Every
> Computer Scientist Should Know About Floating-Point Arithmetic":
>
>      <http://docs.sun.com/source/806-3568/ncg_goldberg.html>
>

This question (or a variant of it) seems to come up pretty frequently
in CLL.  I can really only see a couple of reasons why:

1. People attracted to lisp are stupid or uneducated;
2. It's possible to become `educated' in computer science while
having no understanding at all of floating point arithmetic.

I don't believe (1), so I'm left with (2).  Is this really the case?
If it is, then it confirms my view that most people designing computer
science curricula should be melted down for glue.

--tim
From: Alan Shutko
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <87brdp8r2q.fsf@wesley.springies.com>
"Tim Bradshaw" <··········@tfeb.org> writes:

> 1. People attracted to lisp are stupid or uneducated;

I think this is the real reason.  Not that people are stupid, but
there are a whole lot of people working in software development who
are uneducated.  Few of the folks I've worked with have a CS
degree... most had an unrelated Engineering/Physics/Math/anything else
degree but picked up programming and ended up in software
development.  I know one guy who ended up as a team lead who's now
going back for a masters in CS so he can _get_ some of the background
he missed.

And these days, there are a lot of people who pick up programming by
picking up Perl, Python, or the like.  I believe both use
maximum-precision floating point by default.  The questions we see
here come from people who've heard that lisp is cool, but end up with
an implementation that supports different floating point precisions....


-- 
Alan Shutko <···@acm.org> - I am the rocks.
It's Unfair!: Y. Me
From: Jens Axel Søgaard
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <41a2784a$0$195$edfadb0f@dread12.news.tele.dk>
Alan Shutko wrote:
> "Tim Bradshaw" <··········@tfeb.org> writes:

>>1. People attracted to lisp are stupid or uneducated;

> I think this is the real reason.  Not that people are stupid, but
> there are a whole lot of people working in software development who
> are uneducated.  Few of the folks I've worked with have a CS
> degree... most had an unrelated Engineering/Physics/Math/anything else
> degree but picked up programming and ended up in software
> development.  I know one guy who ended up as a team lead who's now
> going back for a masters in CS so he can _get_ some of the background
> he missed.


The actual question isn't peculiar to Lisp newbies.
It is often seen in connection with the classical
"build a calculator" exercise. Just try a few of the
home-made JavaScript calculators...

-- 
Jens Axel Søgaard
From: Tim Bradshaw
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <1101234618.091293.103390@c13g2000cwb.googlegroups.com>
Alan Shutko wrote:
> "Tim Bradshaw" <··········@tfeb.org> writes:
>
> > 1. People attracted to lisp are stupid or uneducated;
>
> I think this is the real reason.  Not that people are stupid, but
> there are a whole lot of people working in software development who
> are uneducated.  Few of the folks I've worked with have a CS
> degree... most had an unrelated Engineering/Physics/Math/anything else
> degree but picked up programming and ended up in software
> development.  I know one guy who ended up as a team lead who's now
> going back for a masters in CS so he can _get_ some of the background
> he missed.
>

Well, I have an unrelated degree in physics, and I seriously doubt
that people with standard hard-science backgrounds would have issues
with floating point.  I was pretty much a hard-core theoretical
physics person, but I did some practical stuff in my first year (as
everyone did) and you really can't get away from the issues of
measurement and calculation precision, because they're important (even
to theorists: after all theorists are trying to predict and explain
the results of experiments).  And engineers are even more aware of
measurement and precision issues than physicists, of course.

Personally, I think that it's a combination of people with non-science
backgrounds, and the dismal failure of CS education (and I don't mean
CS education in non-first-rank universities, I mean all of it).

--tim
From: Majorinc, Kazimir
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <MPG.1c0ea286bd4319b7989726@news.carnet.hr>
I do not buy the "floating point" excuse for the 0.9-0.5=0.39999998 nonsense. 
From: Paul F. Dietz
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <XPidnSkyafDBETncRVn-og@dls.net>
Majorinc wrote:

> I do not buy the "floating point" excuse for the 0.9-0.5=0.39999998 nonsense. 

Are you saying that that is *not* caused by the finite precision
of floating point arithmetic?

	Paul
From: Cameron MacKinnon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <Ps-dncMZCsaCXjncRVn-hg@golden.net>
Paul F. Dietz wrote:
> Majorinc wrote:
> 
>> I do not buy the "floating point" excuse for the 0.9-0.5=0.39999998 nonsense. 
> 
> 
> Are you saying that that is *not* caused by the finite precision
> of floating point arithmetic?

It's worth pointing out that the original poster didn't ask for infinite 
precision arithmetic.

My $5 desk calculator gets the right answer, as would most nine year 
olds. Even my C compiler seems to. This is one of those instances where 
a computer science education can convince you that the answer that 
everyone thinks is wrong is actually "right" given a set of underlying 
assumptions and trade offs.

32 bits as the default size of a float is probably a poor choice for 
general purpose computing in 2004*. Computer scientists should recognize 
that the majority of people who program computers today are not computer 
scientists, nor are the people who analyze numbers trained numerical 
analysts. If this requires smarter programming environments that can do 
elementary numerical analysis on algorithms in order to offer 
suggestions to the coder, then so be it.

We've come a long way since the typical user of computers looked at the 
numbers scrolling out of his LA-78 and thought "Well, I couldn't get any 
closer with my slide rule."


* - I know, adding mantissa bits doesn't help when the algorithm is flawed, 
but it looks better.
From: Thomas A. Russ
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <ymiis7v6n6b.fsf@sevak.isi.edu>
Cameron MacKinnon <··········@clearspot.net> writes:

> 
> Paul F. Dietz wrote:
> > Majorinc wrote:
> > 
> >> I do not buy the "floating point" excuse for the 0.9-0.5=0.39999998 nonsense. 
> > 
> > 
> > Are you saying that that is *not* caused by the finite precision
> > of floating point arithmetic?
> 
> It's worth pointing out that the original poster didn't ask for infinite 
> precision arithmetic.

Oddly enough, the architectural push in computer design is away from
decimal arithmetic and toward binary arithmetic.  Speed of the
computation has been generally much more highly prized than accuracy of
the computation (cue C vs Lisp war).

Early IBM mainframe computers used BCD arithmetic and had, I believe, at
least some hardware support for that.  Currently I don't know of any
hardware support, nor even of any generally available libraries for this
(excepting Java's BigDecimal library).
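(Python, for what it's worth, also carries a software decimal library in its standard distribution; a quick sketch of what such a library buys:

```python
from decimal import Decimal

# decimal arithmetic represents tenths exactly, so the "calculator" answer:
assert Decimal('0.9') - Decimal('0.5') == Decimal('0.4')
# constructing from a binary float, by contrast, imports the binary error:
assert Decimal(0.9) != Decimal('0.9')
```
)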

> My $5 desk calculator gets the right answer, as would most nine year 
> olds. Even my C compiler seems to. This is one of those instances where 
> a computer science education can convince you that the answer that 
> everyone thinks is wrong is actually "right" given a set of underlying 
> assumptions and trade offs.

But is that because it really has the right answer or because it rounds
to a number of digits that mask the error for the reasonably small chain
of computations that it is displaying?

A quick test using double precision and 10 digits of display when
repeatedly adding 0.001 or 0.9 didn't show any extraneous digits with up
to 100000 repetitions, so it may really be masked by presentation.
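The masking can be probed directly; a Python sketch, comparing a naive running sum against math.fsum, which keeps the intermediate sum exact and rounds only once:

```python
import math

naive = 0.0
for _ in range(1000):
    naive += 0.001                   # each addition rounds

correct = math.fsum([0.001] * 1000)  # exact sum of the doubles, one rounding

assert correct == 1.0                # the single rounding lands on 1.0
assert round(naive, 6) == 1.0        # any drift hides behind a short display
```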

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Cameron MacKinnon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <3b2dndTDFrHynjjcRVn-rQ@golden.net>
Thomas A. Russ wrote:

> Oddly enough, the architectural push in computer design is away from
> decimal arithmetic and toward binary arithmetic.  Speed of the
> computation has been generally much more highly prized than accuracy of
> the computation (cue C vs Lisp war).

I get the impression that nothing much has happened since IEEE: Hardware 
makers (especially gun-shy since Intel's Pentium FPU PR debacle) and 
software writers have decided to live with the devil they know rather 
than innovate. I could be wrong.

>>My $5 desk calculator gets the right answer, as would most nine year 
>>olds. Even my C compiler seems to. This is one of those instances where 
>>a computer science education can convince you that the answer that 
>>everyone thinks is wrong is actually "right" given a set of underlying 
>>assumptions and trade offs.
> 
> 
> But is that because it really has the right answer or because it rounds
> to a number of digits that mask the error for the reasonably small chain
> of computations that it is displaying?

I'm certain that my calculator saws off the ends of numbers, and could 
be made to show it. But I also know that, were a calculator R&D engineer 
to show a model that asserted 0.9 - 0.5 = 0.39999998, the marketing 
department would demand a fix. Convincing someone of the accuracy of a 
machine that comes out with such obviously wrong answers for such common 
inputs is a losing proposition.
From: Hartmann Schaffer
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <V8tpd.15801$Su4.15825@newscontent-01.sprint.ca>
Cameron MacKinnon wrote:
> ...
> I'm certain that my calculator saws off the ends of numbers, and could 
> be made to show it. But I also know that, were a calculator R&D engineer 
> to show a model that asserted 0.9 - 0.5 = 0.39999998, the marketing 
> department would demand a fix. Convincing someone of the accuracy of a 
> machine that comes out with such obviously wrong answers for such common 
> inputs is a losing proposition.

don't most (all?) calculators use decimal arithmetic?  try some sequence
of operations that includes dividing a number that is not a multiple of 3
by 3.  no doubt you'll find enough arguments to convince the marketing
department to pedal back on their demand

a long time ago i tried (out of curiosity) a lengthy sequence of 
operations using trig functions.  i ended up with rounding errors.

the fact is that due to the nature of floating point you end up with
errors as soon as you use numbers that aren't exactly representable

hs
From: Tim Bradshaw
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <1101900439.974091.19890@z14g2000cwz.googlegroups.com>
Hartmann Schaffer wrote:

> don't most (all?) calculators use decimal arithmetic?  try some
> sequence of operations that includes dividing a number that is not a
> multiple of 3 by 3.  no doubt you'll find enough arguments to convince
> the marketing department to pedal back on their demand
>

I think that any calculator worth its salt uses decimal and also
several guard digits (so maybe 13 decimal digits on a 10-digit
display or something), so errors take some time to propagate to the
numbers you see, as well as a lot of very carefully designed
algorithms to give expected results.  There used to be a lot of stuff
around about how the HP48 did things, and it was really very careful
indeed.  And very, very slow of course.

Half the problem that people are seeing is because the default format
for printing things in programming languages tends to be `show all the
bits'.  The default display mode for HP calculators was also something
like 4 significant digits, on an underlying 13 (maybe) digit
calculation.

--tim
From: David Steuber
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <87ekiaau70.fsf@david-steuber.com>
"Tim Bradshaw" <··········@tfeb.org> writes:

> There used to be a lot of stuff around about how the HP48 did
> things, and it was really very careful indeed.  And very, very slow
> of course.

I still have and use an HP48.  I have not taken the plunge of writing
programs for it.  And asking it for a Taylor series (something I've
forgotten due to lack of use) does require some patience.

For unit conversions, the HP48 is awesome.

-- 
An ideal world is left as an exercise to the reader.
   --- Paul Graham, On Lisp 8.1
From: Thomas A. Russ
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <ymi7jo17q2t.fsf@sevak.isi.edu>
"Tim Bradshaw" <··········@tfeb.org> writes:

> There used to be a lot of stuff
> around about how the HP48 did things, and it was really very careful
> indeed.  And very, very slow of course.

Of course for most operations on a hand calculator, the answer is
certainly fast enough to keep up with keying in the operations, so speed
of algorithms wasn't a primary criterion.

Then again, my first scientific calculator had a little "busy" light
that came on when computing trigonometric functions and logarithms since
the delay was on the order of several seconds....

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Barry Margolin
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <barmar-DF70D0.20332701122004@comcast.dca.giganews.com>
In article <·······················@z14g2000cwz.googlegroups.com>,
 "Tim Bradshaw" <··········@tfeb.org> wrote:

> Hartmann Schaffer wrote:
> 
> > don't most (all?) calculators use decimal arithmetic?  try some
> > sequence of operations that includes dividing a number that is not a
> > multiple of 3 by 3.  no doubt you'll find enough arguments to
> > convince the marketing department to pedal back on their demand
> >
> 
> I think that any calculator worth its salt uses decimal and also
> several guard digits (so may be 13 decimal digits on a 10 digit
> display or something), so errors take some time to propagate to the
> numbers you see, as well as a lot of very carefully designed
> algorithms to give expected results.  There used to be a lot of stuff
> around about how the HP48 did things, and it was really very careful
> indeed.  And very, very slow of course.
> 
> Half the problem that people are seeing is because the default format
> for printing things in programming languages tends to be `show all the
> bits'.  The default display mode for HP calculators was also something
> like 4 significant digits, on an underlying 13 (maybe) digit
> calculation.

What happens on most decimal calculators if you divide 1 by 3, and then 
multiply the result by 3?
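(Binary doubles, amusingly, get this particular sequence right by a rounding accident, while a fixed-precision decimal calculator cannot; a Python sketch, emulating a 10-digit calculator with the decimal module:

```python
from decimal import Decimal, getcontext

# binary: 1/3 is inexact, yet multiplying back by 3 rounds to exactly 1.0
assert (1 / 3) * 3 == 1.0

# decimal with 10 digits of precision: the error is left showing
getcontext().prec = 10
assert Decimal(1) / Decimal(3) * Decimal(3) == Decimal('0.9999999999')
```
)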

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: William Bland
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <pan.2004.12.02.19.36.23.323456@abstractnonsense.com>
On Wed, 01 Dec 2004 03:27:20 -0800, Tim Bradshaw wrote:

> Hartmann Schaffer wrote:
> 
>> don't most (all?) calculators use decimal arithmetic?  try some
>> sequence of operations that includes dividing a number that is not a
>> multiple of 3 by 3.  no doubt you'll find enough arguments to
>> convince the marketing department to pedal back on their demand
>>
> 
> I think that any calculator worth its salt uses decimal and also
> several guard digits (so may be 13 decimal digits on a 10 digit
> display or something)

I have to confess I haven't thought this through in depth, but with
today's cheapness and speed of silicon, couldn't calculators further
improve things by remembering the last n operations, and recalculating
them *after* a simplification step is done?  You would still get errors
eventually since the queue would be of finite length, but you wouldn't get
them as quickly.

Cheers,
	Bill.
From: Cameron MacKinnon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <usidnY_qwYmX7DLcRVn-gg@golden.net>
William Bland wrote:
> I have to confess I haven't thought this through in depth, but with
> today's cheapness and speed of silicon, couldn't calculators further
> improve things by remembering the last n operations, and recalculating
> them *after* a simplification step is done?  You would still get errors
> eventually since the queue would be of finite length, but you wouldn't get
> them as quickly.

I suspect that most calculators are still clocked by ultra-cheap 
32.768kHz "watch crystals" - they aren't getting faster because speedier 
crystals cost more. Neat idea, though.
From: Pascal Bourguignon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <87fz2zkto5.fsf@thalassa.informatimago.com>
Cameron MacKinnon <··········@clearspot.net> writes:

> Paul F. Dietz wrote:
> > Majorinc wrote:
> >
> >> I do not buy the "floating point" excuse for the 0.9-0.5=0.39999998 nonsense.
> > Are you saying that that is *not* caused by the finite precision
> > of floating point arithmetic?
> 
> It's worth pointing out that the original poster didn't ask for
> infinite precision arithmetic.
> 
> My $5 desk calculator gets the right answer, as would most nine year
> olds. 

Strange.  When pocket calculators were introduced in French collèges,
teachers developed a corpus of calculation exercises such that pocket
calculators would return the wrong answer but the brain of a
nine-year-old would not...


 
> 32 bits as the default size of a float is probably a poor choice for
> general purpose computing in 2004*.

Yes and no.

Of course, when you compute some discrete stuff, you need more than
six significant digits.  One important topic here is that money often
has a decimal point, but it's not a floating point, it's a fixed
point!  Floats should not be used for money.

But when you have to compute continuous stuff, first you can't on a
discrete computer.  Then most often you don't have enough significant
digits in the first place.

My physics teachers subtracted points when you gave numerical results
with *in*significant digits!

Some physical constants are known up to 10, 12 or 15 digits, but they
are in the minority.  Most of them are not even known at more than 3
or 4 significant digits.  Why would you need more than six significant
digits on input / output floating point numbers?

(Note that some 64-bit FPUs actually compute with 80 bits internally.)


Where do your 0.9 and your 0.5 come from?
You really should use: (format t "~,2F" (- 0.9 0.5))
most of the time anyway.
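The same advice in Python, assuming two digits is what the inputs justify:

```python
# format to the precision the data deserves, not every bit of the double
assert f"{0.3 - 0.2:.2f}" == "0.10"
assert f"{0.9 - 0.5:.2f}" == "0.40"
```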


> We've come a long way since the typical user of computers looked at
> the numbers scrolling out of his LA-78 and thought "Well, I couldn't
> get any closer with my slide rule."

The point is that the 3 significant digits of a slide rule are enough
to go to the Moon and  to do anything else in the physical world.



-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
The world will now reboot; don't bother saving your artefacts.
From: Cameron MacKinnon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <v_mdnSBZx5KSbTncRVn-2Q@golden.net>
Pascal Bourguignon wrote:
> My physics teachers subtracted points when you gave numerical results
> with *in*significant digits!

Mine too. So why don't our computers keep track of significant digits?

> Where do your 0.9 and your 0.5 come from?
> You really should use: (format t "~,2F" (- 0.9 0.5))
> most of the time anyway.

Wouldn't it be nice if floats by default printed with the number of 
significant digits appropriate to their calculation history?

While I agree with the essence of your argument that most of us don't 
need to do arithmetic to fifteen places most of the time, the OP's 
example looks like big lossage to me, even though I know the theory 
behind it. In summary: We have ten fingers, so we use fractions of ten 
more commonly than simple number theory would suggest. Choosing a 
computer representation which can't accurately represent our most common 
constants wasn't computer science's finest moment.

When it comes to integer math, Lispers are proud that their language 
doesn't claim that 2000000000 + 2000000000 = -294967296, so why is 
floating point lossage OK?

Why doesn't the = operator demand a precision argument when comparing 
floats?
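
The standard = takes no tolerance, but one is easy to sketch. APPROX=
and its default epsilon below are my own inventions, not part of CL:

```lisp
(defun approx= (x y &optional (epsilon 1e-6))
  "True if X and Y differ by at most EPSILON, scaled by their magnitude."
  (<= (abs (- x y))
      (* epsilon (max (abs x) (abs y) 1))))

(= (- 0.9 0.5) 0.4)        ; NIL where the difference is 0.39999998
(approx= (- 0.9 0.5) 0.4)  ; => T
```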
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <87r7mjoudq.fsf@qrnik.zagroda>
Cameron MacKinnon <··········@clearspot.net> writes:

> When it comes to integer math, Lispers are proud that their language
> doesn't claim that 2000000000 + 2000000000 = -294967296, so why is
> floating point lossage OK?

How would you represent (sqrt 2) exactly, and perform computations on it?

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Cameron MacKinnon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <dL-dnZpcvNtZbjncRVn-pQ@golden.net>
Marcin 'Qrczak' Kowalczyk wrote:
> Cameron MacKinnon <··········@clearspot.net> writes:
> 
> 
>>When it comes to integer math, Lispers are proud that their language
>>doesn't claim that 2000000000 + 2000000000 = -294967296, so why is
>>floating point lossage OK?
> 
> 
> How would you represent (sqrt 2) exactly, and perform computations on it?

If I understand your argument, you're saying that I should accept 
lossage with numbers that CAN be represented exactly, because otherwise 
it would be unfair discrimination against numbers that can't be 
represented exactly. Maybe I just don't understand your argument.
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <87llcrhrpw.fsf@qrnik.zagroda>
Cameron MacKinnon <··········@clearspot.net> writes:

>> How would you represent (sqrt 2) exactly, and perform computations
>> on it?
>
> If I understand your argument, you're saying that I should accept
> lossage with numbers that CAN be represented exactly, because
> otherwise it would be unfair discrimination against numbers that
> can't be represented exactly.

You *can* represent rational numbers exactly. Just store them as
rationals, not as floating point.
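
A quick REPL sketch of the difference (the exact ratio returned by
RATIONAL varies with the float format):

```lisp
(- 9/10 5/10)       ; => 2/5, exact rational arithmetic
(rational 0.9)      ; => the ratio the float actually holds,
                    ;    near but not equal to 9/10
(rationalize 0.9)   ; => 9/10 in typical implementations
(float (- (rationalize 0.9) (rationalize 0.5)))  ; => 0.4
```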

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Cameron MacKinnon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <WvWdnXER8bn2pzvcRVn-sQ@golden.net>
Marcin 'Qrczak' Kowalczyk wrote:
> Cameron MacKinnon <··········@clearspot.net> writes:
> 
> 
>>>How would you represent (sqrt 2) exactly, and perform computations
>>>on it?
>>
>>If I understand your argument, you're saying that I should accept
>>lossage with numbers that CAN be represented exactly, because
>>otherwise it would be unfair discrimination against numbers that
>>can't be represented exactly.
> 
> 
> You *can* represent rational numbers exactly. Just store them as
> rationals, not as floating point.

Well then the default for the reader, upon encountering a base ten 
number with a decimal point, should be to create a rational. If the user 
wants binary floating point, he should specify his constants in 
hexadecimal, octal or binary. *ducks*
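
Half-seriously, the *ducks* proposal is easy to sketch in CL itself.
DECIMAL->RATIONAL is a hypothetical helper, not a standard function,
and this toy version ignores signs and exponents:

```lisp
(defun decimal->rational (string)
  "Parse a simple \"W.F\" decimal STRING into an exact rational."
  (let* ((dot    (position #\. string))
         (whole  (parse-integer string :end dot))
         (frac   (parse-integer string :start (1+ dot)))
         (places (- (length string) dot 1)))
    (+ whole (/ frac (expt 10 places)))))

(decimal->rational "0.9")                                ; => 9/10
(- (decimal->rational "0.9") (decimal->rational "0.5"))  ; => 2/5
```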


First we did floating point in software, on 8 and 16 bit CPUs. It was 
very, very slow. Programmers were *acutely* aware of the difference 
between using a constant with a decimal point versus one without.

Then we were offered coprocessors that were only very slow. Things were 
so slow that we gladly traded these little quirks of conversion and 
representation, *which were now fixed in hardware, rather than being at 
the option of the library writer*, for being able to add two floats in 
only 30x the time it took to add two integers. Cycles were expensive and 
memory was expensive, so the compromises looked appropriate.

These days, floats add at the same speed as integers. In the new 64 bit 
era, double floats and machine sized integers are the same size, so 
there's no space or speed penalty for using them. So it's only natural 
that "Learn Blub in 21 Days" no longer has a little stop sign in the 
margin with a note that using decimal numbers has space and speed 
ramifications, so use them only when necessary.

Little wonder, then, that those new to the art of programming are 
surprised when they naturally use numbers with a decimal point in them, 
get a wrong answer, and are told that the decimal point is an explicit 
signifier that the programmer cares more about speed than accuracy.
From: Hartmann Schaffer
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <XYspd.15797$Su4.15839@newscontent-01.sprint.ca>
Cameron MacKinnon wrote:
> ...
> These days, floats add at the same speed as integers.

i haven't checked out execution times for floating point instructions 
for a while, but i have my doubts about this statement.  last time i 
checked, there was a significant difference. and due to the nature of 
floating point arithmetic i can't see this difference go away (unless, 
of course, you deliberately slow down integer arithmetic)

> ...

hs
From: Geoffrey Summerhayes
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <2zApd.50422$Ro.2111872@news20.bellglobal.com>
"Hartmann Schaffer" <··@hartmann.schaffernet> wrote in message ··························@newscontent-01.sprint.ca...
> Cameron MacKinnon wrote:
>> ...
>> These days, floats add at the same speed as integers.
>
> i haven't checked out execution times for floating point instructions for a while, but i have my doubts about this statement. 
> last time i checked, there was a significant difference. and due to the nature of floating point arithmetic i can't see this 
> difference go away (unless, of course, you deliberately slow down integer arithmetic)

Well, first off, using integers for floats means managing the representation
manually: allocating additional space, calculating exponents, and/or shifting
mantissas, depending on the representation.

On the float side, FPUs are certainly faster than using the ALU to manage
an fp number. The other factor is pipelining: once an operation has been
started, there is no reason to stall if the next op does not require the
result. Clever queuing of groups of FP operations can let them finish
within a clock cycle or two of their integer equivalents.

--
Geoff
From: Barry Margolin
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <barmar-1D7301.15364729112004@comcast.dca.giganews.com>
In article <······················@golden.net>,
 Cameron MacKinnon <··········@clearspot.net> wrote:

> Well then the default for the reader, upon encountering a base ten 
> number with a decimal point, should be to create a rational. If the user 
> wants binary floating point, he should specify his constants in 
> hexadecimal, octal or binary. *ducks*

CL's behavior is a holdover from Maclisp, which didn't have rational 
numbers.  The CL designers didn't want to make an incompatible change to 
the language regarding how numbers with decimal points are read, it 
would have broken lots of programs that were being ported to CL.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Andreas Eder
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <m3llcqrhc0.fsf@banff.eder.de>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:

> How would you represent (sqrt 2) exactly, and perform computations on it?

Oh, you can do that. In fact you can process algebraic numbers just
fine, if you represent them as their minimal polynomial together with
an approximation (rational) to distinguish the different roots.
There are some articles on the web about this and I vaguely remember
reading a (French?) dissertation on that topic. In fact, a short
Google search brings up quite a lot of references and papers.

Andreas
-- 
Wherever I lay my .emacs, there's my $HOME.
From: Bruno Haible
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <co4npt$qfb$1@laposte.ilog.fr>
Cameron MacKinnon wrote:
> So why don't our computers keep track of significant digits?
> ...
> Wouldn't it be nice if floats by default printed with the number of 
> significant digits appropriate to their calculation history?

A library which does this is MPFR by Paul Zimmermann (http://www.mpfr.org/).
So far no CL implementation uses it as its floating-point number
implementation.

          Bruno
From: Greg Menke
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <m3oehmsqk5.fsf@europa.pienet>
Pascal Bourguignon <····@mouse-potato.com> writes:

> Cameron MacKinnon <··········@clearspot.net> writes:
> 
> > Paul F. Dietz wrote:
> 
> > We've come a long way since the typical user of computers looked at
> > the numbers scrolling out of his LA-78 and thought "Well, I couldn't
> > get any closer with my slide rule."
> 
> The point is that the 3 significant digits of a slide rule are enough
> to go to the Moon and to do anything else in the physical world.
> 


Try telling that to a machinist.

Gregm
From: Svein Ove Aas
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <co2q72$kl3$1@services.kq.no>
Cameron MacKinnon wrote:

> Paul F. Dietz wrote:
>> Majorinc wrote:
>> 
>>> I do not buy "floating point" excuse for 0.9-0.5=0.39998 nonsense.
>> 
>> 
>> Are you saying that that is *not* caused by the finite precision
>> of floating point arithmetic?
> 
> It's worth pointing out that the original poster didn't ask for infinite
> precision arithmetic.
> 
> My $5 desk calculator gets the right answer, as would most nine year
> olds. Even my C compiler seems to. This is one of those instances where
> a computer science education can convince you that the answer that
> everyone thinks is wrong is actually "right" given a set of underlying
> assumptions and trade offs.
> 
> 32 bits as the default size of a float is probably a poor choice for
> general purpose computing in 2004*. Computer scientists should recognize
> that the majority of people who program computers today are not computer
> scientists, nor are the people who analyze numbers trained numerical
> analysts. If this requires smarter programming environments that can do
> elementary numerical analysis on algorithms in order to offer
> suggestions to the coder, then so be it.
> 
If that's a problem for you, then you can rebind *read-default-float-format*
to 'double-float, or whatever strikes your fancy, and it will go away.

In fact, you can setf it, dump core, and use that core as the default. It's
what I did, and it doesn't appear to break anything.
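
For the record, the rebinding looks like this; printed results vary by
implementation, though the double-float difference happens to print as
0.4d0 under shortest-round-trip printing:

```lisp
(let ((*read-default-float-format* 'single-float))
  (- (read-from-string "0.9") (read-from-string "0.5")))  ; e.g. 0.39999998

(let ((*read-default-float-format* 'double-float))
  (- (read-from-string "0.9") (read-from-string "0.5")))  ; e.g. 0.4d0
```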
From: Fred Gilham
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <u7is7vxm2w.fsf@snapdragon.csl.sri.com>
Majorinc, Kazimir wrote:

> I do not buy "floating point" excuse for 0.9-0.5=0.39998 nonsense. 

Remember that 0.9 and 0.5 get read into memory in a binary format.
How would you, personally, write 0.9 and 0.5 in binary?
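
INTEGER-DECODE-FLOAT shows the answer directly (values shown are for a
typical 24-bit single float):

```lisp
(integer-decode-float 0.5)  ; => 8388608, -24, 1
                            ;    2^23/2^24 = 1/2, exact
(integer-decode-float 0.9)  ; => 15099494, -24, 1
                            ;    15099494/2^24, slightly below 9/10
```

0.5 is a power of two and comes out exact; 0.9 has no finite binary
expansion, so the stored value can only approximate it.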

-- 
Fred Gilham                                       ······@csl.sri.com
"[The democratic political process] is a travesty of a mockery of a
sham of a mockery of a travesty of two mockeries of a sham."
                                                 -- Woody Allen
From: David Steuber
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <874qjfkz7b.fsf@david-steuber.com>
Majorinc, Kazimir <·····@email.address> writes:

> I do not buy "floating point" excuse for 0.9-0.5=0.39998 nonsense. 

CL-USER> (- 0.9 0.5)
0.39999998
CL-USER> (- 0.9d0 0.5d0)
0.4D0
CL-USER> (- 9/10 5/10)
2/5

I'm not sure what else it can be.

-- 
An ideal world is left as an excercise to the reader.
   --- Paul Graham, On Lisp 8.1
From: Christopher C. Stacy
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <ur7mln0bv.fsf@news.dtpq.com>
"Tim Bradshaw" <··········@tfeb.org> writes:

> Jens Axel Søgaard wrote:
> > If you want the full explanation see the classic article "What Every
> > Computer Scientist Should Know About Floating-Point Arithmetic":
> >
> >      <http://docs.sun.com/source/806-3568/ncg_goldberg.html>
> >
> 
> This question (or a variant of it) seems to come up pretty frequently
> in CLL.  I can really only see a couple of reasons why:
> 
> 1. People attracted to lisp are stupid or uneducated;
> 2. It's possible to become `educated' in computer science while
> having no understanding at all of floating point arithmetic.
> 
> I don't believe (1), so I'm left with (2).  Is this really the case?
> If it is, then it confirms my view that most people designing computer
> science curricula should be melted down for glue.

As far as I can tell, most schools don't even explain what binary
numbers are, let alone what floating point is.  They are too busy
trying to teach where to place the braces and semi-colons.
From: Trent Buck
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <20041123070440.1a2b4898@harpo.marx>
Quoth Tim Bradshaw on or about 2004-11-22:
> 1. People attracted to lisp are stupid or uneducated;
> 2. It's possible to become `educated' in computer science while
> having no understanding at all of floating point arithmetic.
> 
> I don't believe (1), so I'm left with (2).  Is this really the case?
> If it is, then it confirms my view that most people designing computer
> science curricula should be melted down for glue.

It depends on whether your school is clued.  My current Bit-of-Paper
Vendor introduces MIPS assembly to first-year students to force them to
learn about registers, stacks, two's complement, IEEE754, etc.

Of course, it's possible to be ignorant through keenness -- the earliest
I can `officially' learn Lisp is sixth semester, which is about nine
months away.

-trent
From: Thomas F. Burdick
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <xcvpt24labf.fsf@conquest.OCF.Berkeley.EDU>
"Tim Bradshaw" <··········@tfeb.org> writes:

> Jens Axel Søgaard wrote:
> > If you want the full explanation see the classic article "What Every
> > Computer Scientist Should Know About Floating-Point Arithmetic":
> >
> >      <http://docs.sun.com/source/806-3568/ncg_goldberg.html>
> >
> 
> This question (or a variant of it) seems to come up pretty frequently
> in CLL.  I can really only see a couple of reasons why:
> 
> 1. People attracted to lisp are stupid or uneducated;
> 2. It's possible to become `educated' in computer science while
> having no understanding at all of floating point arithmetic.
> 
> I don't believe (1), so I'm left with (2).  Is this really the case?
> If it is, then it confirms my view that most people designing computer
> science curricula should be melted down for glue.

Understanding floating point arithmetic just isn't stressed enough.
Autodidacts might be more likely than folks with CS degrees to
actually understand it.  Even in good schools where people are taught
fp, unless someone is interested in the more number-crunching end of
things, it'll have been a test they took once, and then started fading
from memory.
From: Surendra Singhi
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <cnvu2d$grd$1@news.asu.edu>
Thomas F. Burdick wrote:
> "Tim Bradshaw" <··········@tfeb.org> writes:
> 
> 
>>Jens Axel Søgaard wrote:
>>
>>>If you want the full explanation see the classic article "What Every
>>>Computer Scientist Should Know About Floating-Point Arithmetic":
>>>
>>>     <http://docs.sun.com/source/806-3568/ncg_goldberg.html>
>>>
>>This question (or a variant of it) seems to come up pretty frequently
>>in CLL.  I can really only see a couple of reasons why:
>>
>>1. People attracted to lisp are stupid or uneducated;
>>2. It's possible to become `educated' in computer science while
>>having no understanding at all of floating point arithmetic.
>>
>>I don't believe (1), so I'm left with (2).  Is this really the case?
>>If it is, then it confirms my view that most people designing computer
>>science curricula should be melted down for glue.
> 
> 
> Understanding floating point arithmetic just isn't stressed enough.
> Autodidacts might be more likely than folks with CS degrees to
> actually understand it.  Even in good schools where people are taught
> fp, unless someone is interested in the more number-crunching end of
> things, it'll have been a test they took once, and then started fading
> from memory.

So true.
-- 
Surendra Singhi

www.public.asu.edu/~sksinghi
From: Gorbag
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <FsLod.2$Fk2.1@bos-service2.ext.ray.com>
"Thomas F. Burdick" <···@conquest.OCF.Berkeley.EDU> wrote in message
····················@conquest.OCF.Berkeley.EDU...
> "Tim Bradshaw" <··········@tfeb.org> writes:
>
> > Jens Axel Søgaard wrote:
> > > If you want the full explanation see the classic article "What Every
> > > Computer Scientist Should Know About Floating-Point Arithmetic":
> > >
> > >      <http://docs.sun.com/source/806-3568/ncg_goldberg.html>
> > >
> >
> > This question (or a variant of it) seems to come up pretty frequently
> > in CLL.  I can really only see a couple of reasons why:
> >
> > 1. People attracted to lisp are stupid or uneducated;
> > 2. It's possible to become `educated' in computer science while
> > having no understanding at all of floating point arithmetic.
> >
> > I don't believe (1), so I'm left with (2).  Is this really the case?
> > If it is, then it confirms my view that most people designing computer
> > science curricula should be melted down for glue.
>
> Understanding floating point arithmetic just isn't stressed enough.
> Autodidacts might be more likely than folks with CS degrees to
> actually understand it.  Even in good schools where people are taught
> fp, unless someone is interested in the more number-crunching end of
> things, it'll have been a test they took once, and then started fading
> from memory.

And the reason is that it just isn't that useful a bit of trivia to know;
otherwise it'd be in working memory, right?

I can count using one thumb the number of times I've actually needed to know
something about FP in a career spanning several decades. NP completeness has
been more useful; complexity theory has been more useful (well, maybe not
the Gap Theorem); Undecidability, Higher order logic, Possible Worlds,
Russell's Paradox, all pretty obscure corners and all much more useful to me
in my career than anything at all about FP. So why exactly does this "need"
to be in CS curricula if most of the above are not? Sure, everyone should
know something about FP, but I hardly think much more than it's an
approximation.
From: Thomas A. Russ
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <ymillcr6pks.fsf@sevak.isi.edu>
"Gorbag" <······@invalid.acct> writes:
> So why exactly does this "need"
> to be in CS curricula if most of the above are not? Sure, everyone should
> know something about FP, but I hardly think much more than it's an
> approximation.

I suppose SOME mention of this is needed to stem the large number of
questions or "bug reports" concerning floating-point arithmetic that
one often finds on bulletin boards.  It occurs with ridiculous
frequency, for example, on comp.lang.javascript, but perhaps that is
because many JavaScript users are not formally CS trained.

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Pascal Bourguignon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <87is7wmlrs.fsf@thalassa.informatimago.com>
···@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> Understanding floating point arithmetic just isn't stressed enough.
> Autodidacts might be more likely than folks with CS degrees to
> actually understand it.  Even in good schools where people are taught
> fp, unless someone is interested in the more number-crunching end of
> things, it'll have been a test they took once, and then started fading
> from memory.

There's only one way: implement floating point routines on a 8-bit
microprocessor (6502, 68xx, z80).


(Well, there's another way: read a math book about computer
arithmetic. But if they don't know fp in the first place, they won't
be able to read it, so back to the 6502.)

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
The world will now reboot; don't bother saving your artefacts.
From: Barry Margolin
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <barmar-8CAEF4.15292229112004@comcast.dca.giganews.com>
In article <························@f14g2000cwb.googlegroups.com>,
 "Tim Bradshaw" <··········@tfeb.org> wrote:

> Jens Axel Søgaard wrote:
> > If you want the full explanation see the classic article "What Every
> > Computer Scientist Should Know About Floating-Point Arithmetic":
> >
> > >      <http://docs.sun.com/source/806-3568/ncg_goldberg.html>
> >
> 
> This question (or a variant of it) seems to come up pretty frequently
> in CLL.  I can really only see a couple of reasons why:
> 
> 1. People attracted to lisp are stupid or uneducated;
> 2. It's possible to become `educated' in computer science while
> having no understanding at all of floating point arithmetic.
> 
> I don't believe (1), so I'm left with (2).  Is this really the case?
> If it is, then it confirms my view that most people designing computer
> science curricula should be melted down for glue.

Many people who program computers don't have any formal computer science 
education at all.  Lots of them learn it on their own.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Christopher C. Stacy
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <uekicti55.fsf@news.dtpq.com>
Barry Margolin <······@alum.mit.edu> writes:

> In article <························@f14g2000cwb.googlegroups.com>,
>  "Tim Bradshaw" <··········@tfeb.org> wrote:
> 
> > > Jens Axel Søgaard wrote:
> > > If you want the full explanation see the classic article "What Every
> > > Computer Scientist Should Know About Floating-Point Arithmetic":
> > >
> > > >      <http://docs.sun.com/source/806-3568/ncg_goldberg.html>
> > >
> > 
> > This question (or a variant of it) seems to come up pretty frequently
> > in CLL.  I can really only see a couple of reasons why:
> > 
> > 1. People attracted to lisp are stupid or uneducated;
> > 2. It's possible to become `educated' in computer science while
> > having no understanding at all of floating point arithmetic.
> > 
> > I don't believe (1), so I'm left with (2).  Is this really the case?
> > If it is, then it confirms my view that most people designing computer
> > science curricula should be melted down for glue.
> 
> Many people who program computers don't have any formal computer science 
> education at all.  Lots of them learn it on their own.

That was the case back in the 1970s and before, and I think
in the 1980s, also; but is it still true?
From: Barry Margolin
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <barmar-013132.19464429112004@comcast.dca.giganews.com>
In article <·············@news.dtpq.com>,
 ······@news.dtpq.com (Christopher C. Stacy) wrote:

> Barry Margolin <······@alum.mit.edu> writes:
> 
> > In article <························@f14g2000cwb.googlegroups.com>,
> >  "Tim Bradshaw" <··········@tfeb.org> wrote:
> > 
> > > Jens Axel Søgaard wrote:
> > > > If you want the full explanation see the classic article "What Every
> > > > Computer Scientist Should Know About Floating-Point Arithmetic":
> > > >
> > > >      <http://docs.sun.com/source/806-3568/ncg_goldberg.html>
> > > >
> > > 
> > > This question (or a variant of it) seems to come up pretty frequently
> > > in CLL.  I can really only see a couple of reasons why:
> > > 
> > > 1. People attracted to lisp are stupid or uneducated;
> > > 2. It's possible to become `educated' in computer science while
> > > having no understanding at all of floating point arithmetic.
> > > 
> > > I don't believe (1), so I'm left with (2).  Is this really the case?
> > > If it is, then it confirms my view that most people designing computer
> > > science curricula should be melted down for glue.
> > 
> > Many people who program computers don't have any formal computer science 
> > education at all.  Lots of them learn it on their own.
> 
> That was the case back in the 1970s and before, and I think
> in the 1980s, also; but is it still true?

I expect it's even *more* true now.  In the 70's, it was relatively 
difficult to get access to computers if you weren't in a CS class.  Now 
almost everyone has home computers, so it's easy to get started 
programming on your own.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Christopher C. Stacy
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <u1xect6s7.fsf@news.dtpq.com>
Barry Margolin <······@alum.mit.edu> writes:

> In article <·············@news.dtpq.com>,
>  ······@news.dtpq.com (Christopher C. Stacy) wrote:
> 
> > Barry Margolin <······@alum.mit.edu> writes:
> > 
> > > In article <························@f14g2000cwb.googlegroups.com>,
> > >  "Tim Bradshaw" <··········@tfeb.org> wrote:
> > > 
> > > > Jens Axel Søgaard wrote:
> > > > > If you want the full explanation see the classic article "What Every
> > > > > Computer Scientist Should Know About Floating-Point Arithmetic":
> > > > >
> > > > >      <http://docs.sun.com/source/806-3568/ncg_goldberg.html>
> > > > >
> > > > 
> > > > This question (or a variant of it) seems to come up pretty frequently
> > > > in CLL.  I can really only see a couple of reasons why:
> > > > 
> > > > 1. People attracted to lisp are stupid or uneducated;
> > > > 2. It's possible to become `educated' in computer science while
> > > > having no understanding at all of floating point arithmetic.
> > > > 
> > > > I don't believe (1), so I'm left with (2).  Is this really the case?
> > > > If it is, then it confirms my view that most people designing computer
> > > > science curricula should be melted down for glue.
> > > 
> > > Many people who program computers don't have any formal computer science 
> > > education at all.  Lots of them learn it on their own.
> > 
> > That was the case back in the 1970s and before, and I think
> > in the 1980s, also; but is it still true?
> 
> I expect it's even *more* true now.  In the 70's, it was relatively 
> difficult to get access to computers if you weren't in a CS class.  Now 
> almost everyone has home computers, so it's easy to get started 
> programming on your own.

But is that who's getting hired?
From: Barry Margolin
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <barmar-4D1FBB.00331001122004@comcast.dca.giganews.com>
In article <·············@news.dtpq.com>,
 ······@news.dtpq.com (Christopher C. Stacy) wrote:

> Barry Margolin <······@alum.mit.edu> writes:
> 
> > In article <·············@news.dtpq.com>,
> >  ······@news.dtpq.com (Christopher C. Stacy) wrote:
> > 
> > > Barry Margolin <······@alum.mit.edu> writes:
> > > 
> > > > In article <························@f14g2000cwb.googlegroups.com>,
> > > >  "Tim Bradshaw" <··········@tfeb.org> wrote:
> > > > 
> > > > > Jens Axel Søgaard wrote:
> > > > > > If you want the full explanation see the classic article "What 
> > > > > > Every
> > > > > > Computer Scientist Should Know About Floating-Point Arithmetic":
> > > > > >
> > > > > >      <http://docs.sun.com/source/806-3568/ncg_goldberg.html>
> > > > > >
> > > > > 
> > > > > This question (or a variant of it) seems to come up pretty frequently
> > > > > in CLL.  I can really only see a couple of reasons why:
> > > > > 
> > > > > 1. People attracted to lisp are stupid or uneducated;
> > > > > 2. It's possible to become `educated' in computer science while
> > > > > having no understanding at all of floating point arithmetic.
> > > > > 
> > > > > I don't believe (1), so I'm left with (2).  Is this really the case?
> > > > > If it is, then it confirms my view that most people designing 
> > > > > computer
> > > > > science curricula should be melted down for glue.
> > > > 
> > > > Many people who program computers don't have any formal computer 
> > > > science 
> > > > education at all.  Lots of them learn it on their own.
> > > 
> > > That was the case back in the 1970s and before, and I think
> > > in the 1980s, also; but is it still true?
> > 
> > I expect it's even *more* true now.  In the 70's, it was relatively 
> > difficult to get access to computers if you weren't in a CS class.  Now 
> > almost everyone has home computers, so it's easy to get started 
> > programming on your own.
> 
> But is that who's getting hired?

People end up programming via many circuitous routes.

For instance, when I was at Thinking Machines, I was one of the few system 
administrators who had any formal CS education.  The lead sysadmin for a 
while was formerly an administrative assistant (aka secretary), another 
one was a guy we had hired to do menial tasks like pulling cables.  Over 
time, as they learned more about Unix, they started writing scripts to 
automate common tasks or solve one-time problems.  Voila! another 
programmer who never took a programming class.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Peter Seibel
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <m3oehg8f8n.fsf@javamonkey.com>
······@news.dtpq.com (Christopher C. Stacy) writes:

> Barry Margolin <······@alum.mit.edu> writes:
>
>> Many people who program computers don't have any formal computer
>> science education at all. Lots of them learn it on their own.
>
> That was the case back in the 1970s and before, and I think in the
> 1980s, also; but is it still true?

Sure. Lot's of 30-something programmers I know have undergrad degrees
in Physics, Electrical Engineering, and Math. Personally, I wandered
in from the English department. And several of the best programmers I
know never went to college.

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Peter Seibel
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <m3k6s48ebk.fsf@javamonkey.com>
Peter Seibel <·····@javamonkey.com> writes:

> ······@news.dtpq.com (Christopher C. Stacy) writes:
>
>> Barry Margolin <······@alum.mit.edu> writes:
>>
>>> Many people who program computers don't have any formal computer
>>> science education at all. Lots of them learn it on their own.
>>
>> That was the case back in the 1970s and before, and I think in the
>> 1980s, also; but is it still true?
>
> Sure. Lot's of 30-something programmers I know have undergrad degrees
> in Physics, Electrical Engineering, and Math. Personally, I wandered
> in from the English department. And several of the best programmers I
> know never went to college.

Of course it'd be the post in which I mention my English degree that's
got a bonehead typo in the second word. s/Lot's/Lots/. I'm sure
there's more where that one came from.

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: William Bland
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <pan.2004.11.29.22.59.42.339920@abstractnonsense.com>
On Mon, 29 Nov 2004 22:25:29 +0000, Peter Seibel wrote:

> ······@news.dtpq.com (Christopher C. Stacy) writes:
> 
>> Barry Margolin <······@alum.mit.edu> writes:
>>
>>> Many people who program computers don't have any formal computer
>>> science education at all. Lots of them learn it on their own.
>>
>> That was the case back in the 1970s and before, and I think in the
>> 1980s, also; but is it still true?
> 
> Sure. Lot's of 30-something programmers I know have undergrad degrees
> in Physics, Electrical Engineering, and Math. Personally, I wandered
> in from the English department. And several of the best programmers I
> know never went to college.
> 
> -Peter

I have a PhD in pure mathematics and taught myself to program computers in
my spare time.  My younger brother dropped out of university after one
year of undergraduate mathematics, having also taught himself programming
in his spare time.  Both of us find it fairly easy to out-program people
with formal CS backgrounds.  When I first decided not to continue along
the academic route, I briefly regretted not taking any CS courses... until
I started working with people who had.  Oh, and we are both in our 20s.

Cheers,
	Bill.
From: Barry Margolin
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <barmar-EB609B.00353501122004@comcast.dca.giganews.com>
In article <················@mjolner.upc.no>,
 "John Thingstad" <··············@chello.no> wrote:

> Floating point arithmetric is covered by taking a beginners course
> in numerical analysis.
> However, at some universities, it is possible to get a computer
> science degree without such a cource.
> Still, having taken one, I can recomend it.

I got my CS degree at MIT, arguably one of the best colleges for this 
subject.  There were math requirements for the degree, but I don't think 
numerical analysis was in there, IIRC.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: John Thingstad
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <opsiblu7w7pqzri1@mjolner.upc.no>
On Wed, 01 Dec 2004 00:35:35 -0500, Barry Margolin <······@alum.mit.edu>  
wrote:

> In article <················@mjolner.upc.no>,
>  "John Thingstad" <··············@chello.no> wrote:
>
>> Floating point arithmetric is covered by taking a beginners course
>> in numerical analysis.
>> However, at some universities, it is possible to get a computer
>> science degree without such a cource.
>> Still, having taken one, I can recomend it.
>
> I got my CS degree at MIT, arguably one of the best colleges for this
> subject.  There were math requirements for the degree, but I don't think
> numerical analysis was in there, IIRC.
>

Yeah. I went to the University of Oslo.
It wasn't required there either.
Still, I can't help but feel he has a point.
Perhaps it should be required.
A lot of programmers will be let loose on code that does numerical
computations without any proper training. This can mean making
programmes that give inaccurate or even erroneous results.


-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
From: Cameron MacKinnon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <4LOdnVQuO6hwMzDcRVn-rQ@golden.net>
John Thingstad wrote:
> On Wed, 01 Dec 2004 00:35:35 -0500, Barry Margolin 
> <······@alum.mit.edu>  wrote:
> 
>> In article <················@mjolner.upc.no>,
>>  "John Thingstad" <··············@chello.no> wrote:
>>
>>> Floating point arithmetric is covered by taking a beginners course
>>> in numerical analysis.
>>> However, at some universities, it is possible to get a computer
>>> science degree without such a cource.
>>> Still, having taken one, I can recomend it.
>>
>>
>> I got my CS degree at MIT, arguably one of the best colleges for this
>> subject.  There were math requirements for the degree, but I don't think
>> numerical analysis was in there, IIRC.
>>
> 
> Yeah. I went to the University of Oslo.
> It wasn't required there either.
> Still I can't help but feel he has a point.
> Perhaps it should be required.
> A lot of programmers will be let loose on code that does numerical  
> computations
> without any proper training. This can mean making programmes that give
> inaccurate or even erroneous results.

I can't agree with the general consensus that the problem here is a lack 
of user education. Babbage help us if the goal of a CS education is to 
teach Johnny why HAL can't add.

Yes, there are many instances where coders need to be aware of the 
pitfalls of finite precision computation. Overflow, underflow and 
cumulative loss of precision can all produce incorrect answers. At 
minimum, those involved with scientific computation ought to be taught 
what they don't know, so they will recognize possible trouble spots and 
call for expert assistance.

Our OP wasn't bitten by overflow, underflow or cumulative loss of 
precision. As has been noted, his query is hardly unique on c.l.l, or 
elsewhere. When one person complains about something, it's an anomaly or 
  a clue to improve the documentation. When a multitude complain, it's 
the market sending a signal. No matter how well we explain the cause, 
the user is going to hear some combination of "the computer can't add" 
and "that answer is close enough for you, luser."

If the computer really couldn't be programmed to add, we'd have our 
educational work cut out for us. But the plain fact is, these people are 
being bitten by representational issues in a number system designed with 
another user group in mind - the 'hard' scientists who had the budget 
for scientific computing back in the day. These days, everyone computes, 
and most of the math that most computers do isn't for scientific 
computation.  Users have expressed a clear and continuing preference to 
get correct answers on short chains of operations with operands 
typically in the range +/- 1e-5 to 1e10.

We ought to recognize that IEEE FP was a more appropriate compromise for 
the typical uses of computers then than now. If users of desktop 
computers were given a choice today, one vote per CPU, I'm certain that 
they would overwhelmingly vote for a representation more attuned to the 
needs of the modern everyman, and less to the needs of Mayan calendar 
makers.

I realize that, in the next five years, binary floating point is about 
as likely to be displaced as ternary computers are to become popular. 
But calling users stupid or uneducated for expecting teenager-level 
arithmetic from their computers ought to be discouraged. Our computers 
are our tools; if they don't do what naive users reasonably expect, it's 
time to fix the tools.
From: John Thingstad
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <opsibw9ajrpqzri1@mjolner.upc.no>
On Wed, 01 Dec 2004 06:45:48 -0500, Cameron MacKinnon  
<··········@clearspot.net> wrote:

> I realize that, in the next five years, binary floating point is about  
> as likely to be displaced as ternary computers are to become popular.  
> But calling users stupid or uneducated for expecting teenager-level  
> arithmetic from their computers ought to be discouraged. Our computers  
> are our tools; if they don't do what naive users reasonably expect, it's  
> time to fix the tools.

Oh, but we do.
Mathematica is an excellent example.
But if you want to understand the underlying principles you will sooner
or later have to relate to the fact that computers have finite precision.
If you want to make numerical computations you still have to compute
the error made and make sure it is within the specified limit.
If the numbers are too high you need to do the computation with the
logarithm.
You need to reorder the calculation to reduce roundoff error by minimising
the number of multiplications and so forth.
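The logarithm idea above can be sketched in a few lines of Common Lisp. The name `log-product` is mine, for illustration only, and the sketch assumes all inputs are positive:

```lisp
(defun log-product (numbers)
  "Multiply NUMBERS by summing their logarithms and exponentiating
once at the end, so a long chain of multiplications cannot overflow
the float range midway.  Assumes every element is positive."
  (exp (reduce #'+ numbers :key #'log)))

;; (log-product (list 2d0 3d0 4d0)) is 24.0d0 up to rounding error,
;; but the intermediate sums stay small even for thousands of factors.
```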


-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
From: Joseph Oswald
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <e86bd0ec.0412011141.156c32b3@posting.google.com>
"John Thingstad" <··············@chello.no> wrote in message news:<················@mjolner.upc.no>...
> On Wed, 01 Dec 2004 06:45:48 -0500, Cameron MacKinnon  
> <··········@clearspot.net> wrote:
> 
> > I realize that, in the next five years, binary floating point is about  
> > as likely to be displaced as ternary computers are to become popular.  
> > But calling users stupid or uneducated for expecting teenager-level  
> > arithmetic from their computers ought to be discouraged. Our computers  
> > are our tools; if they don't do what naive users reasonably expect, it's  
> > time to fix the tools.
> 
> Oh, but we do.
> Mathematica is a excellent example.

I'm not sure what you mean here. Mathematica took an unconventional
approach to issues of precision, which is by no means foolproof. It
just has different traps.

Richard Fateman and other experts have described these much better
than I could. But, speaking in a Lisp forum, it seems similar to
Wolfram's unconventional approach to Lisp-like functional programming.

Addressing Cameron's original goal: all tools have imperfections, and
part of using tools responsibly is to understand their limitations.
The underlying mathematics of real numbers is about as far a thing as
possible from what computers can compute. It is also far from human
intuition. Humans change context almost reflexively in dealing with a
concept like "0.9"; expecting computers to follow along with human
intuition is beyond a pipe dream.

As a simple example, consider taking the reciprocal of 0.9; most
people would say "10/9" and, if particularly careful, would try to
remember that the original number had only a certain number of
significant digits, which has to be derived from context. (The
convention that 0.9 has one significant digit is just a convention.
Outside of Chemistry/Physics I, Chapter 1, it is by no means
universally followed.)

You really expect a computer to follow along?
From: John Thingstad
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <opsicj1cxnpqzri1@mjolner.upc.no>
On 1 Dec 2004 11:41:09 -0800, Joseph Oswald <··············@hotmail.com>  
wrote:

> "John Thingstad" <··············@chello.no> wrote in message  
> news:<················@mjolner.upc.no>...
>> On Wed, 01 Dec 2004 06:45:48 -0500, Cameron MacKinnon
>> <··········@clearspot.net> wrote:
>>
>> > I realize that, in the next five years, binary floating point is about
>> > as likely to be displaced as ternary computers are to become popular.
>> > But calling users stupid or uneducated for expecting teenager-level
>> > arithmetic from their computers ought to be discouraged. Our computers
>> > are our tools; if they don't do what naive users reasonably expect,  
>> it's
>> > time to fix the tools.
>>
>> Oh, but we do.
>> Mathematica is a excellent example.
>
> I'm not sure what you mean here. Mathematica took an unconventional
> approach to issues of precision, which is by no means foolproof. It
> just has different traps.
>

As I said earlier, computers have finite precision.

-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
From: Cameron MacKinnon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <xvWdnWsnAeBWhjPcRVn-pw@golden.net>
John Thingstad wrote:
> But if you want to understand the undelying principles you will sooner
> or later have to relate to the fact that computers have finite precision.

Show me an example of a newbie's post that required infinite precision. 
There is no underlying "principle" of CS that says computers can't add, 
and if young students can be taught how to subtract 0.11111... from 
0.99999... without using an infinite number of pencils, I see no reason 
why it would require an infinite number of electrons.

> If you want to make numerical computations you still have to compute
> the error made and make sure it is within the specified limit.
> If the numbers are to high you need to do the computation with the  
> logarithm.
> You need to reorder the calculation to reduce roundoff error by minimising
> the number of multiplications and so forth.

This is all true as far as it goes (though some of these things look 
suspiciously easy to automate), but it misses my point: The VAST 
majority of newbies aren't caught out by precision issues while 
performing long chains of sophisticated numerics, nor will they ever in 
their programming careers work on such problems. They're doing a small 
number of operations (in the OP's case, *one*) on numbers with few 
significant digits (in the OP's case, *one*) and an exponent close to 
zero. Their computers are spending 95% of CPU driving the GUI, and there 
is no good reason why their expectation of accurate answers can't be 
fulfilled.
From: Pascal Bourguignon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <87zn0xc6r4.fsf@thalassa.informatimago.com>
Cameron MacKinnon <··········@clearspot.net> writes:

> John Thingstad wrote:
> > But if you want to understand the undelying principles you will sooner
> > or later have to relate to the fact that computers have finite precision.
> 
> Show me an example of a newbie's post that required infinite
> precision. There is no underlying "principle" of CS that says
> computers can't add, and if young students can be taught how to
> subtract 0.11111... from 0.99999... without using an infinite number
> of pencils, I see no reason why it would require an infinite number of
> electrons.

No, but try this rather: 3.14159265..... - 0.11111111.... = ?
without an infinite number of pencils.


> This is all true as far as it goes (though some of these things look
> suspiciously easy to automate), but it misses my point: The VAST
> majority of newbies aren't caught out by precision issues while
> performing long chains of sophisticated numerics, nor will they ever
> in their programming careers work on such problems. They're doing a
> small number of operations (in the OP's case, *one*) on numbers with
> few significant digits (in the OP's case, *one*) and an exponent close
> to zero. Their computers are spending 95% of CPU driving the GUI, and
> there is no good reason why their expectation of accurate answers
> can't be fulfilled.

Why do you think lisp invented rational numbers?
Try: (- 1 1/9) instead of 0.9999999... - 0.1111111....

This is still a problem of teaching users to use 1/9 instead of
0.11111 and 4/10 instead of 0.4!
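At the repl, the suggestion above looks like this; a minimal sketch (the RATIONALIZE result shown is what typical implementations return, but the standard only requires a rational consistent with the float's precision):

```lisp
;; The OP's subtraction, redone with exact rationals:
(- 9/10 5/10)           ; => 2/5, exact
(float (- 9/10 5/10))   ; => 0.4, rounded only once, at the very end
;; RATIONALIZE recovers a "nice" rational from a float read earlier:
(rationalize 0.9)       ; => 9/10 in typical implementations
```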


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
The world will now reboot; don't bother saving your artefacts.
From: Cameron MacKinnon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <x8adnbc21PxhuDPcRVn-qQ@golden.net>
Cameron MacKinnon wrote:
> John Thingstad wrote:

>> Yeah. I went to the University of Oslo.

> Babbage help us if the goal of a CS education is to teach Johnny why HAL can't add.

I should clarify here that the 'Johnny' I was referring to is the 
proverbial one that education critics refer to when they say "Johnny 
can't read", not John Thingstad. Apologies if my poor choice of 
stand-ins appeared insulting.
From: Hartmann Schaffer
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <GWqrd.16342$Su4.21398@newscontent-01.sprint.ca>
Cameron MacKinnon wrote:
> ...
> I can't agree with the general consensus that the problem here is a lack
> of user education. Babbage help us if the goal of a CS education is to 
> teach Johnny why HAL can't add.
> 
> Yes, there are many instances where coders need to be aware of the 
> pitfalls of finite precision computation. Overflow, underflow and 
> cumulative loss of precision can all produce incorrect answers. At 
> minimum, those involved with scientific computation ought to be taught 
> what they don't know, so they will recognize possible trouble spots and 
> call for expert assistance.

i think you don't get what the problem is.  it is a plain fact that
there are some limitations to floating point, some of them showing up
much earlier than the cases you provided above.  one of the first to
show up is that not all numbers can be represented exactly.  now put
yourself into the mind of a computer that has to process a floating
point instruction.  it sees the opcode and one or two numbers.  it has
no idea about how the numbers were derived.  if one of the numbers happens
to be very close to some other number, it has no way of knowing whether
the difference is because of a representation problem, any of the
problems you mentioned above, or whether it happens to be the exact
representation of a number that just happens to be very close to some
other number.  what do you suggest the computer do?  btw, you'll end up
with similar problems when you do arithmetic in decimal.  simply do some
calculations where you divide by 3 or 7, or use numbers like pi or e.

the fact is that if you use some tool, you better be aware of the tool's 
limitations.  if you aren't, something will bite you eventually
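The divide-by-3 point above is easy to check at the repl; a minimal sketch using only standard Common Lisp (exact printed digits vary by implementation, so none are shown):

```lisp
;; 1/3 and 1/7 are exact as rationals...
(+ 1/3 1/7)             ; => 10/21, still exact
;; ...but have no finite expansion in binary *or* decimal, so
;; coercing to a float necessarily rounds:
(rational (float 1/3))  ; a dyadic rational near, but not equal to, 1/3
```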

btw, to some extent i agree with you.  some applications, e.g. math 
drills for elementary school children, should take care of problems like 
that.  but not interfaces to programming languages, like e.g. the lisp repl.
> ...

hs
From: Cameron MacKinnon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <0NKdneqI_YnY_zPcRVn-2w@golden.net>
Hartmann Schaffer wrote:

> i think you don't get what the problem is.  it is a plain fact that 
> there are some limitations to floating point, some of them showing up 
> much earlier than the cases you provided above.  one of the first to 
> show up is that not all numbers can represented exactly.

Not all numbers can be represented exactly, eh? Could you quantify the 
problem for me? What percentage of non-integers (with n or fewer digits 
to the right of the point) that can be represented exactly in base ten 
cannot be represented exactly in IEEE FP?

Maybe this is just a basic truth, that not all fractional numbers 
exactly representable in one base are exactly representable in another. 
What percentage of numbers exactly representable in IEEE FP are not 
exactly representable in base ten?

> the fact is that if you use some tool, you better be aware of the tool's 
> limitations.  if you aren't, something will bite you eventually

On one level, this is certainly true. But at a higher level, large 
numbers of disfigured customers are indicative of tool design issues.

Here's something I suspect to be an unfortunate truth: The man in the 
street attributes more accuracy to numbers expressed as decimals than 
those expressed as fractions. This makes the Lisper's advice of "use 
rationals if you want accuracy, floats if you don't" counterintuitive.

> btw, to some extent i agree with you.  some applications, e.g. math 
> drills for elementary school children, should take care of problems like 
> that.  but not interfaces to programming languages, like e.g. the lisp 
> repl.

This is utterly absurd. You admit that our machines can be coaxed into 
producing correct answers, but think that only our children are worthy 
of the effort? Those kids are just going to get confused later on, when 
they reach the age of inaccuracy. Far better, I say, to chop two fingers 
off each kindergarten entrant and educate them in octal. If we can't 
adapt our machines to serve us, we must adapt ourselves to serve our 
machines.
From: Wade Humeniuk
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <tDxrd.35007$VL6.10783@clgrps13>
Cameron MacKinnon wrote:
> Hartmann Schaffer wrote:
> 
>> i think you don't get what the problem is.  it is a plain fact that 
>> there are some limitations to floating point, some of them showing up 
>> much earlier than the cases you provided above.  one of the first to 
>> show up is that not all numbers can represented exactly.
> 
> 
> Not all numbers can be represented exactly, eh? Could you quantify the 
> problem for me? What percentage of non-integers (with n or fewer digits 
> to the right of the point) that can be represented exactly in base ten 
> cannot be represented exactly in IEEE FP?
> 

If even one human-readable decimal cannot be mapped exactly into IEEE FP, and
there obviously are such decimals, then IEEE FP is not sufficient for exact
representations.

That would be uncountable-infinity-percent.  Real numbers cannot be mapped
to the integers.  There are uncountably more reals than integers.  IEEE FP is
only a finite subset of the integers, even as the limit of n approaches infinity.
Even if one allowed unlimited-sized IEEE-like FP, it would still only
be isomorphic to the integers, as computer memory is countable.  This means
computers cannot in general represent real numbers; it's impossible.  One has
to make compromises and decide the maximum margin of imprecision you are
willing to accept.  IEEE has already decided that in the majority of cases.

Wade
From: Cameron MacKinnon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <d4mdnQ5GkaF3JjPcRVn-sg@golden.net>
Wade Humeniuk wrote:
> Cameron MacKinnon wrote:
> 
>> Hartmann Schaffer wrote:
>> 
>>> i think you don't get what the problem is.  it is a plain fact
>>> that there are some limitations to floating point, some of them
>>> showing up much earlier than the cases you provided above.  one
>>> of the first to show up is that not all numbers can represented
>>> exactly.
>> 
>> 
>> 
>> Not all numbers can be represented exactly, eh? Could you quantify
>> the problem for me? What percentage of non-integers (with n or
>> fewer digits to the right of the point) that can be represented
>> exactly in base ten cannot be represented exactly in IEEE FP?
>> 
> 
> Even if one human readable decimal cannot be mapped exactly into IEEE
>  FP, which there obviously are, means that IEEE FP is not sufficient
> for exact representations.
> 
> That would be uncountable-infinity-percent.  Real numbers cannot be
> mapped to the integers.  There are uncountably more reals than
> integers.  IEEE FP is only a finite subset of the integers even as
> the limit of n approaches infinity. Even if one allowed unlimited
> sized IEEE-like FP, they will still only be isomorphic to the
> integers as computer memory is countable.  This means computers
> cannot in general represent real numbers, its impossible.  One has to
> make compromises and decide what the max margin of imprecision you
> are willing to have.  IEEE has already decided that in the majority
> of cases.
> 
> Wade

This is true, but I was hoping Hartmann would do the analysis for, say 
0<n<5. His statement that "not all numbers can [be] represented exactly" 
is the standard CS line on FP. While technically true, it's awfully 
misleading. Hearing it, a coder might suppose that some numbers can be 
represented exactly, maybe enough that he'd find a few of them over the 
course of his career. But a quick thought experiment shows that for any 
decimal number with a fractional part, if the last nonzero digit is not 
5 (90% of them), it isn't representable, and if the last digit is 5, 
(the other 10%), it probably isn't representable.

CS profs and texts would do their audiences a favour if they just listed 
the fractions that are exactly representable, or said "If you write down 
a random, non-integer decimal number, the chance that it is 
representable is on par with the chance that it is tomorrow's winning 
lottery number. Every FP constant in your code is a bug."
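The claim above can be checked directly: RATIONAL exposes the exact value a float literal actually denotes, and for most decimal literals it is not the rational the programmer wrote.  A minimal sketch:

```lisp
;; 0.5 = 1/2 has a power-of-two denominator, so it survives intact:
(rational 0.5)          ; => 1/2
;; 0.9 = 9/10 does not, so the stored value is a nearby dyadic rational:
(rational 0.9)          ; some p/2^k close to, but not equal to, 9/10
```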
From: Pascal Bourguignon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <87u0r5c60g.fsf@thalassa.informatimago.com>
Cameron MacKinnon <··········@clearspot.net> writes:
> >> Not all numbers can be represented exactly, eh? Could you quantify
> >> the problem for me? What percentage of non-integers (with n or
> >> fewer digits to the right of the point) that can be represented
> >> exactly in base ten cannot be represented exactly in IEEE FP?
> This is true, but I was hoping Hartmann would do the analysis for, say
> 0<n<5. His statement that "not all numbers can [be] represented
> exactly" is the standard CS line on FP. While technically true, it's
> awfully misleading. Hearing it, a coder might suppose that some
> numbers can be represented exactly, maybe enough that he'd find a few
> of them over the course of his career. But a quick thought experiment
> shows that for any decimal number with a fractional part, if the last
> nonzero digit is not 5 (90% of them), it isn't representable, and if
> the last digit is 5, (the other 10%), it probably isn't representable.
> 
> CS profs and texts would do their audiences a favour if they just
> listed the fractions that are exactly representable, or said "If you
> write down a random, non integer decimal number, the chance that it is
> representable are on par with the chance that it is tomorrow's winning
> lottery number. Every FP constant in your code is a bug."

The percentage of non-integer real number that can be represented
exactly in a floating point notation is exactly 0%.

There are an infinite number of non-integer real numbers and a finite
number of representations, and the ratio of finite / infinite is
exactly 0.

So when you want to enter a real number in a computer it is ALWAYS wrong.


Now, if you'd ask about rational numbers, we could get some more
meaningful answer.


And CS profs and texts have no business in teaching this basic
mathematical fact that should have been learned ten years before CS.


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
The world will now reboot; don't bother saving your artefacts.
From: Svein Ove Aas
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <con1ql$805$1@services.kq.no>
Pascal Bourguignon wrote:

> There are an infinite number of non-integer real numbers and a finite
> number of representation, and the ratio of finite / infinite is
> exactly 0.
> 
> So when you want to enter a real number in a computer it is ALWAYS wrong.
> 
Wrong, unless you consider every FP number to be a range. (Which is a
legitimate POV, mind you.)

Finite/infinite is indeed 0, but that's a limit, and can't be used to assert
that the value of "finite" is 0. There *are* numbers that are exactly
representable in FP, and a very large number of them; it's just that
they're slightly outnumbered.

> 
> Now, if you'd ask about rational numbers, we could get some more
> meaningful answer.
> 
That's aleph-0/aleph-1, which is still 0 even with infinite memory.
Irrational numbers just can't be represented in any uniform manner I know
of - part and parcel of the whole "uncountable" thing.

> 
> And CS profs and texts have no business in teaching this basic
> mathematical fact that should have been learned ten years before CS.
> 
> 
Assuming you go directly from high school to university... you may have
learned the difference between rational and irrational numbers, but you
certainly won't understand it.

I think that's a bit overly optimistic of you.
From: Wade Humeniuk
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <MfHrd.37573$VL6.23538@clgrps13>
Cameron MacKinnon wrote:
> 
> 
> This is true, but I was hoping Hartmann would do the analysis for, say 
> 0<n<5. His statement that "not all numbers can [be] represented exactly" 
> is the standard CS line on FP. While technically true, it's awfully 
> misleading. Hearing it, a coder might suppose that some numbers can be 
> represented exactly, maybe enough that he'd find a few of them over the 
> course of his career. But a quick thought experiment shows that for any 
> decimal number with a fractional part, if the last nonzero digit is not 
> 5 (90% of them), it isn't representable, and if the last digit is 5, 
> (the other 10%), it probably isn't representable.
> 

Running a quick check by coding up a test:

(I think this code is correct)


(defun determine-exact-floats (n)
   "Determine the number of floats that are
    exact representations of decimal fractions.
    n is the number of decimal digits of precision."
   (loop with max-integer = (expt 10 n)
         for integer from 0 below max-integer
         for exact-rational = (/ integer max-integer)
         when (= exact-rational (rational (float exact-rational)))
         sum 1 into number-exacts and
         collect (format nil ".~V,'0D" n integer) into exacts
         finally
         (return (values number-exacts max-integer
                         (format nil "~F%" (* (/ number-exacts max-integer) 100))
                         exacts))))

Running it we get

CL-USER 65 > (determine-exact-floats 5)
32
100000
"0.032%"
(".00000" ".03125" ".06250" ".09375" ".12500" ".15625" ".18750" ".21875" ".25000" ".28125" 
".31250" ".34375" ".37500" ".40625" ".43750" ".46875" ".50000" ".53125" ".56250" ".59375" 
".62500" ".65625" ".68750" ".71875" ".75000" ".78125" ".81250" ".84375" ".87500" ".90625" 
".93750" ".96875")

CL-USER 66 > (determine-exact-floats 6)
64
1000000
"0.0064%"
(".000000" ".015625" ".031250" ".046875" ".062500" ".078125" ".093750" ".109375" ".125000" 
".140625" ".156250" ".171875" ".187500" ".203125" ".218750" ".234375" ".250000" ".265625" 
".281250" ".296875" ".312500" ".328125" ".343750" ".359375" ".375000" ".390625" ".406250" 
".421875" ".437500" ".453125" ".468750" ".484375" ".500000" ".515625" ".531250" ".546875" 
".562500" ".578125" ".593750" ".609375" ".625000" ".640625" ".656250" ".671875" ".687500" 
".703125" ".718750" ".734375" ".750000" ".765625" ".781250" ".796875" ".812500" ".828125" 
".843750" ".859375" ".875000" ".890625" ".906250" ".921875" ".937500" ".953125" ".968750" 
".984375")

CL-USER 67 >

Pretty small, isn't it?

Wade
From: Wade Humeniuk
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <bvHrd.37574$VL6.31924@clgrps13>
I would conjecture, based on those two runs, that the number
of exactly representable decimal fractions of n digits
is (probably regardless of FP precision)

(expt 2 n)

hmmm, interesting result.

Wade
From: David Steuber
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <87fz2ov9zi.fsf@david-steuber.com>
Wade Humeniuk <····································@telus.net> writes:

> I would conjecture based on those two runs the number
> of exactly representable decimal fractions of n digits
> is (probably regardless of FP precision)
> 
> (expt 2 n)
> 
> hmmm, interesting result.

There is probably a proof for that, although I'm not at all sure how
to find it.  If computers used base 3 instead of binary, would the
answer have been (expt 3 n)?  If no, then what?

I guess you've covered the rationals.  Too bad the irrationals are
running the asylum.

-- 
An ideal world is left as an exercise to the reader.
   --- Paul Graham, On Lisp 8.1
From: Harald Hanche-Olsen
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <pco1xe85nf8.fsf@shuttle.math.ntnu.no>
+ David Steuber <·····@david-steuber.com>:

| Wade Humeniuk <····································@telus.net> writes:
| 
| > I would conjecture based on those two runs the number
| > of exactly representable decimal fractions of n digits
| > is (probably regardless of FP precision)
| > 
| > (expt 2 n)
| > 
| > hmmm, interesting result.
| 
| There is probably a proof for that, although I'm not at all sure how
| to find it.

The n digit decimal fractions are all the numbers of the form x/10^n
where x is an integer in [0,10^n).  To be exactly representable is to
be of the form y/2^k.  This happens precisely when x is a multiple of
5^n (since 10 is the product of the two primes 2 and 5).  There will
be precisely 10^n/5^n=2^n such cases.

| If computers used base 3 instead of binary, would the
| answer have been (expt 3 n)?  If no, then what?

Then, being exactly representable means to be of the form y/3^k.
Unfortunately, no non-integer of this form has a finite decimal
expansion.

| I guess you've covered the rationals.  Too bad the irrationals are
| running the asylum.

Ah, but one can still do exact arithmetic on algebraic numbers.  Of
course, they are still only a countable family.

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
  than thinking does: but it deprives us of whatever chance there is
  of getting closer to the truth.  -- C.P. Snow
From: Wade Humeniuk
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <HbPrd.225076$df2.32399@edtnps89>
Harald Hanche-Olsen wrote:

 >
 > The n-digit decimal fractions are all the numbers of the form x/10^n
 > where x is an integer in [0,10^n).  To be exactly representable is to
 > be of the form y/2^k.  This happens precisely when x is a multiple of
 > 5^n (since 10 is the product of the two primes 2 and 5).  There will
 > be precisely 10^n/5^n = 2^n such cases.
 >

Of course, thanks for the heads up,

Just to flesh it out a bit:

Definition: n-digit decimal fraction (nd for short): a number of the form x/10^n where
             x is an integer in the interval D(n)=[0,10^n)

Definition: k-digit binary fraction (kb for short): a number of the form y/2^k where
             y is an integer in the interval B(k)=[0,2^k)

For nd=kb =>

1) x/10^n = y/2^k  =>
2) x = (10^n/2^k)y ; since x and y are integers (take y/2^k in lowest terms) =>
3) 10^n/2^k must be an integer
4) Since the prime decomposition of 10 is 2*5,
5) 10^n/2^k = (2*5)^n/2^k = (2^n/2^k)5^n ; thus for this to be an integer
6) n must be >= k, and (10^n/2^k) = 2^(n-k)5^n .
7) Since any k-digit binary fraction with k < n is also an n-digit
    binary fraction (y/2^k = 2^(n-k)y/2^n), we may take k = n.
    Thus,
8) x = 5^n * y where x is in D(n) and y is in B(n)

Thus the set of nd with nd=kb is the set 5^n * B(n), which has
2^n elements.

Computational Check:


(defun determine-exact-floats (n)
   "Determine the number of floats that are
    exact representations of decimal fractions.
    N is the number of decimal digits of precision."
   (loop with max-integer = (expt 10 n)
         for integer from 0 below max-integer
         for exact-rational = (/ integer max-integer)
         when (= exact-rational (rational (float exact-rational)))
         sum 1 into number-exacts and
         collect (format nil ".~V,'0D" n integer) into exacts
         finally
         (return (values number-exacts max-integer
                         (format nil "~F%" (* (/ number-exacts max-integer) 100))
                         exacts))))

(defun set-decimal-fractions (n)
   (loop for i from 0 below (expt 2 n)
         collect (format nil ".~V,'0D" n (* i (expt 5 n)))))


CL-USER 9 > (multiple-value-bind (ne mi % exacts)
                 (determine-exact-floats 5)
               (equal exacts (set-decimal-fractions 5)))
T

CL-USER 10 > (multiple-value-bind (ne mi % exacts)
                 (determine-exact-floats 6)
               (equal exacts (set-decimal-fractions 6)))
T

CL-USER 11 > (set-decimal-fractions 5)
(".00000" ".03125" ".06250" ".09375" ".12500" ".15625" ".18750" ".21875" ".25000" ".28125" 
".31250" ".34375" ".37500" ".40625" ".43750" ".46875" ".50000" ".53125" ".56250" ".59375" 
".62500" ".65625" ".68750" ".71875" ".75000" ".78125" ".81250" ".84375" ".87500" ".90625" 
".93750" ".96875")

CL-USER 12 >

and

(defun decimal-fractions-percentage (n)
   (format t "~12F%~%" (/ (expt 2 n) (expt 10 n))))

For 10-digit decimal fractions, the proportion exactly representable in FP is

CL-USER 13 > (decimal-fractions-percentage 10)
0.0000001024%
NIL

CL-USER 14 >

Wade

That got my brain going again.


From: David Steuber
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <87oehbcrvx.fsf@david-steuber.com>
Harald Hanche-Olsen <······@math.ntnu.no> writes:

> + David Steuber <·····@david-steuber.com>:
> 
> | I guess you've covered the rationals.  Too bad the irrationals are
> | running the asylum.
> 
> Ah, but one can still do exact arithmetic on algebraic numbers.  Of
> course, they are still only a countable family.

I'm probably getting in over my head here, but how does this work?  I
guess the most common example of an algebraic irrational number would
be (sqrt 2).  Wouldn't you have to leave it in the form (sqrt 2) to do
exact arithmetic?  Once evaluated, you lose something.

The sigma series that most rapidly converge on pi use roots which I
have eschewed for not knowing how to handle them when I was playing
around with The Joy of Pi thread.

I also didn't know the set of algebraic numbers had cardinality.  I
thought it was more like the set of transcendental numbers or reals in
general.  I don't know if I could follow a proof that the set of
algebraic numbers is countable, but you are welcome to post one.

-- 
An ideal world is left as an exercise to the reader.
   --- Paul Graham, On Lisp 8.1
From: William D Clinger
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <fb74251e.0412031254.5c3f63f8@posting.google.com>
David Steuber <·····@david-steuber.com> wrote:
> > Ah, but one can still do exact arithmetic on algebraic numbers.  Of
> > course, they are still only a countable family.
> 
> I'm probably getting in over my head here, but how does this work?  I
> guess the most common example of an algebraic irrational number would
> be (sqrt 2).  Wouldn't you have to leave it in the form (sqrt 2) to do
> exact arithmetic?  Once evaluated, you lose something.

(sqrt 2) would evaluate to some exact representation of the
square root of 2.  Although the list (sqrt 2) is one possible
representation, it isn't the only one.

The algebraic numbers are closed under addition, subtraction,
multiplication, and division, so we really can do exact arithmetic
on them.

> I also didn't know the set of algebraic numbers had cardinality.

If you believe in the axiom of choice, and in any of the usual
formalizations of set theory, then your beliefs imply that every
set has a cardinality.  (I don't believe in the axiom of choice
myself, but I know a lot of people who say they do.)

> I
> thought it was more like the set of transcendental numbers or reals in
> general.  I don't know if I could follow a proof that the set of
> algebraic numbers is countable, but you are welcome to post one.

An algebraic number is a root of a polynomial with rational
coefficients.  There are only a countable number of polynomials
with rational coefficients, and each of those polynomials has
only a finite number of roots (bounded by the degree of the
polynomial).  Hence there are at most countably many algebraic
numbers.

It might be fun to talk about floating-point arithmetic in CL,
but I think Cameron needs to figure out whether he wants CL to
be a tool for educated programmers or a calculator for teenagers.
:)

Will
From: William Bland
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <pan.2004.12.03.21.11.17.184969@abstractnonsense.com>
On Fri, 03 Dec 2004 12:54:54 -0800, William D Clinger wrote:

> David Steuber <·····@david-steuber.com> wrote:
>> I also didn't know the set of algebraic numbers had cardinality.
> 
> If you believe in the axiom of choice...

Of course it only makes sense to "believe" in an axiom if you are a
Platonist (which I am not).

Cheers,
	Bill.
From: Svein Ove Aas
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <cor1c8$ass$1@services.kq.no>
William Bland wrote:

> On Fri, 03 Dec 2004 12:54:54 -0800, William D Clinger wrote:
> 
>> David Steuber <·····@david-steuber.com> wrote:
>>> I also didn't know the set of algebraic numbers had cardinality.
>> 
>> If you believe in the axiom of choice...
> 
It is unclear to me what this means.
If you deduce theorems using one set of axioms, you get one set of theorems.
If you use another, slightly larger set of axioms (including the axiom of
choice, that is), you get a different set of theorems.

Neither of these is likely to map terribly well to physical reality; you
need an entirely different set of axioms for that than most people are used
to. It doesn't matter, though; both sets are still valid maths, and may be
useful elsewhere.

> Of course it only makes sense to "believe" in an axiom if you are a
> Platonist (which I am not).
> 
How does being a Platonist (or not) have any bearing on things? In conjunction
with maths, I'd say you're referring to arithmetic realism, which is an
interesting idea. It doesn't appear[1] to affect us personally, though.

1: Very literally "appear"; if it's true, it has some downright angsty
ontological consequences. Let's not get into that, though.
From: David Steuber
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <87zn0t98pr.fsf@david-steuber.com>
··········@verizon.net (William D Clinger) writes:

> If you believe in the axiom of choice, and in any of the usual
> formalizations of set theory, then your beliefs imply that every
> set has a cardinality.  (I don't believe in the axiom of choice
> myself, but I know a lot of people who say they do.)

I've not heard of the axiom of choice.  I googled it and found some
information.  It didn't help me.

http://www.math.vanderbilt.edu/~schectex/ccc/choice.html
http://mathworld.wolfram.com/AxiomofChoice.html

Then things got weird:

http://www.cs.uu.nl/people/jur/progrock.html
http://www.axiomofchoice.com/
http://www.xdot25.com/artists/axiom.html

Math is really complicated.

-- 
An ideal world is left as an exercise to the reader.
   --- Paul Graham, On Lisp 8.1
From: lin8080
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <41B3BA95.30028A53@freenet.de>
David Steuber schrieb:

> Math is really complicated.

Don't panic. Do it like a mathematican: make a new definition :)

stefan
From: Harald Hanche-Olsen
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <pcois7gltue.fsf@shuttle.math.ntnu.no>
+ lin8080 <·······@freenet.de>:

| David Steuber schrieb:
| 
| > Math is really complicated.
| 
| Don't panic. Do it like a mathematican: make a new definition :)

Ah, but that is not where the fun lies.  To quote something (possibly
inaccurately) I saw in a book a while ago:

  Having refreshed ourselves in the oasis of proof,
  we return to the desert of definition.

A sentiment I am sure most mathematicians share.

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
  than thinking does: but it deprives us of whatever chance there is
  of getting closer to the truth.  -- C.P. Snow
From: lin8080
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <41B5B556.DDB3BD3C@freenet.de>
Harald Hanche-Olsen schrieb:
> + lin8080 <·······@freenet.de>:
> | David Steuber schrieb:

> | > Math is really complicated.
> | Don't panic. Do it like a mathematican: make a new definition :)

> Ah, but that is not where the fun lies.  To quote something (possibly
> inaccurately) I saw in a book a while ago:

>   Having refreshed ourselves in the oasis of proof,
>   we return to the desert of definition.

> A sentiment I am sure most mathematicians share.

Hm. Usually a definition is made by one, a proof by many. So oasis and
desert should be swapped. The reason the quote reads the way it does may be
that there are too many definitions. That could be changed by definition. But
with every new definition you walk a step away from universality towards
the special, and this will require more definitions than CL now owns. (A
modular system is not that bad.)

Do you think Lisp is organized like mathematics? Is it best to copy
math structures into an interpreter's internals? I say no; just look at
how your brain does it. There is association, something like: it
looks like...  In math you have exactly this or that; otherwise you
need a new definition.

Hm, making you feel at home with the interpreter needs some human
elements. So what is the next definition?

stefan 

(fun-factor 4)
From: Jon Boone
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <m3r7m3w70i.fsf@spiritus.delamancha.org>
David Steuber <·····@david-steuber.com> writes:

> ··········@verizon.net (William D Clinger) writes:
>
>> If you believe in the axiom of choice, and in any of the usual
>> formalizations of set theory, then your beliefs imply that every
>> set has a cardinality.  (I don't believe in the axiom of choice
>> myself, but I know a lot of people who say they do.)
>
> I've not heard of the axiom of choice.  I googled it and found some
> information.  It didn't help me.
>
> http://www.math.vanderbilt.edu/~schectex/ccc/choice.html
> http://mathworld.wolfram.com/AxiomofChoice.html

  I think that the first link you posted is a good one for getting the
  basic idea.  It directly addresses the cardinality issue that Will
  Clinger brought up.  Did you not find it helpful in understanding
  his remark?

--jon
From: David Steuber
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <871xe23qzs.fsf@david-steuber.com>
Jon Boone <········@delamancha.org> writes:

> David Steuber <·····@david-steuber.com> writes:
> 
> >
> > I've not heard of the axiom of choice.  I googled it and found some
> > information.  It didn't help me.
> >
> > http://www.math.vanderbilt.edu/~schectex/ccc/choice.html
> > http://mathworld.wolfram.com/AxiomofChoice.html
> 
>   I think that the first link you posted is a good one for getting the
>   basic idea.  It directly addresses the cardinality issue that Will
>   Clinger brought up.  Did you not find it helpful in understanding
>   his remark?

The problem has to do with the math getting outside my math
experience.  Set theory is something I've never been exposed to.  At
least not beyond the really simple things such as union, intersection,
etc of sets at an elementary level.  I've never even touched number
theory.

I get that you can represent (sqrt 2) as X^2 - 2 = 0 and that shows
that the set of algebraic numbers has cardinality.  At least I am
making the leap that since the set of integers is cardinal and
polynomial representations use integers and are therefore also cardinal
for the same reason that both the set of all integers and the set of
positive integers is cardinal.

What I do not see is how exact arithmetic can be done on (sqrt 2)
short of doing some symbolic algebra so that the final result is
rational.  Say, (expt (sqrt 2) 2).

-- 
An ideal world is left as an exercise to the reader.
   --- Paul Graham, On Lisp 8.1
From: Harald Hanche-Olsen
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <pcowtvtafmr.fsf@shuttle.math.ntnu.no>
+ David Steuber <·····@david-steuber.com>:

| What I do not see is how exact arithmetic can be done on (sqrt 2)
| short of doing some symbolic algebra so that the final result is
| rational.  Say, (expt (sqrt 2) 2).

Well, this is not the place for a full tutorial, but let me scratch
the surface a bit.  Write X for (sqrt 2).  So (reverting to classical
algebraic notation rather than lisp notation, if it is allowed) X
satisfies X^2=2.  Now the set of all combinations aX+b where a, b are
rational is a field, meaning you can do all the algebraic operations
(including division) in this field.
For example, (aX+b)(cX+d) = 2ac+bd + (ad+bc)X, and
1/(aX+b) = (aX-b)/(2a^2-b^2), which is trivially written in the same form
(note that 2a^2-b^2 is not 0 because sqrt(2) is irrational).

So this is what I mean by exact arithmetic:  You can compute all sorts
of algebraic combinations of numbers of the form aX+b, using only
operations on rational numbers (exact!) and the relation X^2=2
(exact!).
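
A minimal sketch of this field arithmetic in CL (the representation
and the names q2, q2*, q2-invert are mine): a number aX+b is kept as
a cons of exact rationals, and only rational operations plus the
relation X^2 = 2 are ever used.

(defun q2 (a b)
  ;; Represent a*X + b, X = (sqrt 2), as a cons of exact rationals.
  (cons a b))

(defun q2* (p q)
  ;; (aX + b)(cX + d) = (ad + bc)X + (2ac + bd), using X^2 = 2.
  (let ((a (car p)) (b (cdr p))
        (c (car q)) (d (cdr q)))
    (q2 (+ (* a d) (* b c))
        (+ (* 2 a c) (* b d)))))

(defun q2-invert (p)
  ;; 1/(aX + b) = (aX - b)/(2a^2 - b^2); the denominator is
  ;; nonzero because sqrt 2 is irrational.
  (let* ((a (car p)) (b (cdr p))
         (den (- (* 2 a a) (* b b))))
    (q2 (/ a den) (/ (- b) den))))

;; (q2* (q2 1 0) (q2 1 0))              => (0 . 2)  ; X*X = 2, exactly
;; (q2* (q2 1 1) (q2-invert (q2 1 1)))  => (0 . 1)  ; z * 1/z = 1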

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
  than thinking does: but it deprives us of whatever chance there is
  of getting closer to the truth.  -- C.P. Snow
From: Andreas Eder
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <m3d5xluwlu.fsf@banff.eder.de>
David Steuber <·····@david-steuber.com> writes:

> I get that you can represent (sqrt 2) as X^2 - 2 = 0 and that shows
> that the set of algebraic numbers has cardinality.

Every set has a cardinality (if you use the Axiom of Choice). What the
above 'representation' shows is that the algebraic numbers have the
same cardinality as the natural numbers (aleph-0).

> At least I am
> making the leap that since the set of integers is cardinal and

Repeat, every set has a cardinality.
They do not all have the same cardinality, since the powerset of a set
is always of greater cardinality.

> What I do not see is how exact arithmetic can be done on (sqrt 2)
> short of doing some symbolic algebra so that the final result is
> rational.  Say, (expt (sqrt 2) 2).

You use the above representation of an algebraic number by its minimal
polynomial, together with a (rational) approximation to discriminate
among the associated roots. I wouldn't call that symbolic algebra; it is in
fact rather similar to any other exact arithmetic.

In fact you can even go further and do exact arithmetic on the set of
all computable numbers.

Go google for 'exact arithmetic'. You'll find a lot of articles about
that.

Andreas
-- 
Wherever I lay my .emacs, there's my $HOME.
From: Wade Humeniuk
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <lu1sd.279336$9b.148377@edtnps84>
David Steuber wrote:
> 
> I also didn't know the set of algebraic numbers had cardinality.  I
> thought it was more like the set of transcendental numbers or reals in
> general.  I don't know if I could follow a proof that the set of
> algebraic numbers is countable, but you are welcome to post one.
> 

http://en.wikipedia.org/wiki/Algebraic_closure

Read and follow other links from there.

Wade
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <878y8girie.fsf@qrnik.zagroda>
Wade Humeniuk <····································@telus.net> writes:

> Running it we get
>
> CL-USER 65 > (determine-exact-floats 5)
> 32

It's easy to show that there are 2^n exact, out of 10^n.
They are i/(2^n) for i from 0 to 2^n-1.

The denominator must be a power of 2. Multiplying the number by 10^n
must yield an integer, which implies that the denominator can be
at most 2^n.
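
This characterization is easy to check with exact rationals (the
helper name is mine, for illustration only):

(defun dyadic-decimals-match-p (n)
  ;; The n-digit decimal fractions whose reduced denominator is a
  ;; power of 2 should be exactly i/2^n for i = 0 .. 2^n - 1.
  (equal (loop for x from 0 below (expt 10 n)
               for r = (/ x (expt 10 n))
               ;; dyadic iff the reduced denominator is a power of 2
               when (= 1 (logcount (denominator r)))
               collect r)
         (loop for i from 0 below (expt 2 n)
               collect (/ i (expt 2 n)))))

;; (dyadic-decimals-match-p 5) => T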

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Hartmann Schaffer
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <FqNrd.58$Kf6.423@newscontent-01.sprint.ca>
Cameron MacKinnon wrote:
> Hartmann Schaffer wrote:
> 
>> i think you don't get what the problem is.  it is a plain fact that 
>> there are some limitations to floating point, some of them showing up 
>> much earlier than the cases you provided above.  one of the first to 
>> show up is that not all numbers can be represented exactly.
> 
> 
> Not all numbers can be represented exactly, eh? Could you quantify the 
> problem for me? What percentage of non-integers (with n or fewer digits 
> to the right of the point) that can be represented exactly in base ten 
> cannot be represented exactly in IEEE FP?

no need for IEEE, you can use limited precision decimals.  try to 
represent 1/3, 1/7, pi, or e exactly in decimal with any precision you 
choose.

> Maybe this is just a basic truth, that not all fractional numbers 
> exactly representable in one base are exactly representable in another. 
> What percentage of numbers exactly representable in IEEE FP are not 
> exactly representable in base ten?

depends on the number of positions you want to use behind the decimal 
point, but that's not the problem.  if you do your arithmetic in binary, 
with input and output in decimal, you are bound to run into numbers that 
are exactly representable in one representation but not the other, and 
you are also bound to run into numbers that are not representable in 
either.  but you don't have to worry about that.  you can demonstrate 
the problem easily if you use finite precision decimal arithmetic (e.g. 
3 positions behind the decimal point):

first try

(1/3)*3
you'll end up with 0.999

next try

(0.999/3)*3 or ((1-0.001)/3)*3
and you'll end up with 0.999

without knowing the history of the calculation, how do you decide to 
print the result as 1?
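
For comparison, CL's own exact rationals make the history question
moot; the loss only appears once a fixed precision is imposed:

;; Exact: no rounding anywhere, so the result really is 1.
(* 1/3 3)                             ; => 1

;; Simulating 3-digit decimal arithmetic: round 1/3 to 333/1000
;; first, and the product is 999/1000 -- the 0.999 above.
(* (/ (round (* 1/3 1000)) 1000) 3)   ; => 999/1000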

>> the fact is that if you use some tool, you better be aware of the 
>> tool's limitations.  if you aren't, something will bite you eventually
> 
> 
> On one level, this is certainly true. But at a higher level, large 
> numbers of disfigured customers are indicative of tool design issues.

my heart is bleeding for them.  they decided to use a tool without 
understanding it

> Here's something I suspect to be an unfortunate truth: The man in the 
> street attributes more accuracy to numbers expressed as decimals than 
> those expressed as fractions. This makes the Lisper's advice of "use 
> rationals if you want accuracy, floats if you don't" counterintuitive.

the problem is commonly referred to as innumeracy

>> btw, to some extent i agree with you.  some applications, e.g. math 
>> drills for elementary school children, should take care of problems 
>> like that.  but not interfaces to programming languages, like e.g. the 
>> lisp repl.
> 
> This is utterly absurd. You admit that our machines can be coaxed into 
> producing correct answers, but think that only our children are worthy 
> of the effort? Those kids are just going to get confused later on, when 
> they reach the age of inaccuracy. Far better, I say, to chop two fingers 
> off each kindergarten entrant and educate them in octal. If we can't 
> adapt our machines to serve us, we must adapt ourselves to serve our 
> machines.

could you elaborate a little bit more on how it is utterly absurd?  the 
direct interface to the language connects you directly to the computing 
machinery and should show exactly what is going on there.  if you have 
an application where you know a little bit more about the problem 
domain, you might be able to massage the result to satisfy expectations.
but the lisp repl is not the place.  I would expect financial 
applications to present results in exact pennies though.  and i would 
expect a grade school math drill to use rational numbers where possible, 
so that the problem doesn't arise
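
The exact-pennies idea can be sketched in CL like this (the names
money and format-money, and the rounding policy, are my own
illustrative choices): keep amounts as exact rationals and round only
at print time.

(defun money (dollars cents)
  ;; An exact rational number of dollars, e.g. (money 0 90) => 9/10.
  (+ dollars (/ cents 100)))

(defun format-money (amount)
  ;; Round to whole cents exactly, then split into dollars and cents.
  (multiple-value-bind (dollars cents)
      (floor (round (* amount 100)) 100)
    (format nil "~D.~2,'0D" dollars cents)))

;; (format-money (- (money 0 90) (money 0 50))) => "0.40"
;; whereas (- 0.90 0.50) in single-floats is the familiar 0.39999998.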

hs
From: Cameron MacKinnon
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <LLudnVX8q_v5Ui3cRVn-pw@golden.net>
Hartmann Schaffer wrote:
> no need for IEEE, you can use limited precision decimals.  try to 
> represent 1/3, 1/7, pi, or e exactly in decimal with any precision you 
> choose.

When you find a large user community complaining about this, we can talk 
about it. Otherwise, it is irrelevant to the problem that users ARE 
complaining about.


> if you do you arithmetic in binary, 
> with input and output in decimal, you are bound to run into numbers that 
> are exactly representable in one representation but not the other, 
...[more handwaving]...
> without knowing the history of the calculation, how do you decide to 
> print the result as 1?

Right. Since users want their numbers to add, and since this is not 
possible given IEEE representation and no calculation history, the 
representation needs to change.

In the alternative, you can continue on your quest to convince users 
that both you and the computer are right, with the computer spewing out 
obviously incorrect answers in response to typical users' queries, and 
you defending the computer.


> could you elaborate a little bit more on how it is utterly absurd?  the 
> direct interface to the language connects you directly to the computing 
> machinery and should show exactly what is going on there.

Lisp's philosophy is that users benefit from not being able to directly 
access memory, or directly access the integer "add and ignore overflow" 
instruction supported by so many other languages. Why do you support 
direct access to a broken oracle in this case?
From: Fred Gilham
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <u7k6rzcbuz.fsf@snapdragon.csl.sri.com>
> In the alternative, you can continue on your quest to convince users
> that both you and the computer are right, with the computer spewing
> out obviously incorrect answers in response to typical users'
> queries, and you defending the computer.

Here's a nice common-lisp application:

snapdragon:~ > maxima
GCL (GNU Common Lisp)  (2.5.3) Fri Apr  9 16:58:51 PDT 2004
Licensed under GNU Library General Public License
Dedicated to the memory of W. Schelter
 
Use (help) to get some basic information on how to use GCL.
Maxima 5.9.0 http://maxima.sourceforge.net
Distributed under the GNU Public License. See the file COPYING.
Dedicated to the memory of William Schelter.
This is a development version of Maxima. The function bug_report()
provides bug reporting information.
(C1) .9 - .5;
 
(D1)                                  0.4

Seems like the issue is to distinguish between applications, which
have user-related specs, and language-level representations, which
have programmer-related specs.  The point is that there are floating
point numbers which behave a certain way.  These numbers are intended
for programmer use.  They behave the way they do because people have
sat down and figured out that it's a good idea for them to behave that
way.

The fact that some person might type at the lisp REPL and expect it to
act like a pocket calculator is not a good reason to change the way
these numbers are represented.  It might be a good reason to write a
"pocket calculator emulator" in common lisp, or to point someone at
Maxima.

The point of the example, by the way, is to show that it's perfectly
possible to create applications which do arithmetic in lisp the way a
naive user might expect.

Just for fun, here's another example.

(C15) %e^(%i * %pi);
 
(D15)                                 - 1

I hear that this also works on pocket calculators. :-)

-- 
Fred Gilham                                        ······@csl.sri.com
Democracy and capitalism seem to have triumphed.  But, appearances can
be deceiving.  Instead of celebrating capitalism's virtues, we offer
it grudging acceptance, contemptuous tolerance but only for its
capacity to feed the insatiable maw of socialism. We do not conclude
that socialism suffers from a fundamental and profound flaw. We
conclude instead that its ends are worthy of any sacrifice including
our freedom...                                 -- Janice Rogers Brown
From: Bradley J Lucier
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <coqrhc$e4d@arthur.cs.purdue.edu>
In article <··············@snapdragon.csl.sri.com>,
Fred Gilham  <······@snapdragon.csl.sri.com> wrote:
>
>> In the alternative, you can continue on your quest to convince users
>> that both you and the computer are right, with the computer spewing
>> out obviously incorrect answers in response to typical users'
>> queries, and you defending the computer.
>
>Here's a nice common-lisp application:
>
>snapdragon:~ > maxima
>provides bug reporting information.
>(C1) .9 - .5;
> 
>(D1)                                  0.4
>
>Seems like the issue is to distinguish between applications, which
>have user-related specs, and language-level representations, which
>have programmer-related specs.  The point is that there are floating
>point numbers which behave a certain way.  These numbers are intended
>for programmer use.  They behave the way they do because people have
>sat down and figured out that it's a good idea for them to behave that
>way.
>
>The fact that some person might type at the lisp REPL and expect it to
>act like a pocket calculator is not a good reason to change the way
>these numbers are represented.  It might be a good reason to write a
>"pocket calculator emulator" in common lisp, or to point someone at
>Maxima.

Wow, what an intelligent post.  (And, no, no sarcasm is intended at all.)

I wrote a homework-on-the-Web system for trigonometry students in Scheme.
When necessary, 0.9 - 0.4 evaluates to 0.5, because that's the way I programmed
it.  (And that's what students expect.)

And students have to enter sqrt(2) for some problems that expect
an exact answer; some answers are requested within a certain tolerance, and
those are computed internally using floating-point.
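
A tolerance comparison of that sort can be sketched in CL as follows
(the name approx= and the default tolerance 1e-6 are arbitrary
illustrative choices, not from any particular system):

(defun approx= (x y &optional (tolerance 1e-6))
  ;; True when x and y agree to within a relative tolerance.
  (<= (abs (- x y))
      (* tolerance (max (abs x) (abs y) 1))))

;; (= 0.4 (- 0.9 0.5))       => NIL with single-floats
;; (approx= 0.4 (- 0.9 0.5)) => T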

Floating-point numbers are tools that are used every day in very large, very
important computations.  Many people understand what they are useful for and
appreciate them for what they can do.

There was a proposal a few years back to (a) eliminate the numerical analysis
requirement for CS majors at Purdue and (b) replace it with some comments in
other courses about "the limitations of floating-point arithmetic".  I
pointed out that it would be more useful to teach students the capabilities of
floating-point arithmetic before teaching them the limitations of it.

Proposal (a) passed but proposal (b) failed.

Brad
From: Michael Fleming
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <n05sd.28264$zx1.24657@newssvr13.news.prodigy.com>
Cameron MacKinnon wrote:

> In the alternative, you can continue on your quest to convince users 
> that both you and the computer are right, with the computer spewing out 
> obviously incorrect answers in response to typical users' queries, and 
> you defending the computer.

The Lisp Listener is set up to convert decimal numbers to single-floats 
by default.

X(18): (describe 0.4)
0.4 is a SINGLE-FLOAT.
  It is a writable box.
  The hex representation is [#x3ecccccd].
X(16): (integer-decode-float 0.4)
13421773
-25
1
X(17): (* 13421773 (expt 2 -25) 1)
13421773/33554432

So when you type "0.4" at the prompt, a representation of 
13421773/33554432 is created.

Let's look at the result of (- 0.9 0.5):

X(19): (describe (- 0.9 0.5))
0.39999998 is a SINGLE-FLOAT.
  The hex representation is [#x3ecccccc].
X(20): (integer-decode-float (- 0.9 0.5))
13421772
-25
1
X(22): (* 13421772 (expt 2 -25) 1)
3355443/8388608

You had better understand this if you are going to use floating point 
representations of numbers!

X(24): (= 0.4 (- 0.9 0.5))
NIL

This says that the floating point representation of 0.4 does not equal 
the floating point representation of (- 0.9 0.5), which is absolutely 
correct.

Your mistake is supposing that 0.4 typed into the Listener is identical 
to the decimal number 0.4. It clearly isn't.

You can write a program in Lisp to handle "typical user queries" in 
any way you want. I think some Listeners round floating point numbers 
for display.

I like the Listener to give me the full representation of floating point 
results, but others may disagree.

There is nothing incorrect about the answers "spewed out" by the computer.
From: William D Clinger
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <fb74251e.0412031305.59111a5@posting.google.com>
Cameron MacKinnon <··········@clearspot.net> wrote:
> But calling users stupid or uneducated for expecting teenager-level 
> arithmetic from their computers ought to be discouraged. Our computers 
> are our tools; if they don't do what naive users reasonably expect, it's 
> time to fix the tools.

That settles it.  If (3 + 4) doesn't evaluate to 7 in Common Lisp,
then it's time to fix Common Lisp.

Will
From: Duane Rettig
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <41xe7gjju.fsf@franz.com>
··········@verizon.net (William D Clinger) writes:

> Cameron MacKinnon <··········@clearspot.net> wrote:
> > But calling users stupid or uneducated for expecting teenager-level 
> > arithmetic from their computers ought to be discouraged. Our computers 
> > are our tools; if they don't do what naive users reasonably expect, it's 
> > time to fix the tools.
> 
> That settles it.  If (3 + 4) doesn't evaluate to 7 in Common Lisp,
> then it's time to fix Common Lisp.

Yeah, right on.

CL-USER(1): (3 + 4)
Error: Funcall of 3 which is a non-function.
  [condition type: TYPE-ERROR]

Restart actions (select using :continue):
 0: Return to Top Level (an "abort" restart).
 1: Abort entirely from this (lisp) process.
[1] CL-USER(2): 

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Thomas A. Russ
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <ymiacsx7q8k.fsf@sevak.isi.edu>
Barry Margolin <······@alum.mit.edu> writes:

> 
> In article <················@mjolner.upc.no>,
>  "John Thingstad" <··············@chello.no> wrote:
> 
> > Floating point arithmetic is covered by taking a beginners' course
> > in numerical analysis.
> > However, at some universities, it is possible to get a computer
> > science degree without such a course.
> > Still, having taken one, I can recommend it.
> 
> I got my CS degree at MIT, arguably one of the best colleges for this 
> subject.  There were math requirements for the degree, but I don't think 
> numerical analysis was in there, IIRC.

Ditto.  And no, there was no required numerical analysis class.  I'm not
even sure if it was offered -- but then I was a symbolic AI guy anyway.

Anecdote: One of the odder things on the GRE computer science subject
test (http://www.gre.org/subdesc.html#compsci) was a question about
details of floating-point machine representations.  I remember that I
had never even heard of things like "Excess 40 notation", but I
managed to figure out what it must mean anyway.  Even more importantly,
I figured out why MIT not only didn't require the exam, but claimed that
the graduate admissions committee didn't even look at the scores.


-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Adam Warner
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <pan.2004.11.22.23.26.53.188871@consulting.net.nz>
Hi Jens Axel Søgaard,

> Ossi Herrala wrote:
> 
>> I once heard a (good) explanation why this happens with floating-point
>> numbers, but I feel this is really wrong.
> 
> If you want the full explanation see the classic article "What Every
> Computer Scientist Should Know About Floating-Point Arithmetic":
> 
>      <http://docs.sun.com/source/806-3568/ncg_goldberg.html>
                    ^^^^^^^

I wish Sun had read this before designing Java.

public class JavaIsDangerous {
    public static void main(String[] args) {
        double intermediate = 0.0/0.0;
        double final_result = (intermediate < 3.0) ? intermediate : 10.0;
        System.out.println("The final result is " + final_result);
    }
}

When saved to the file JavaIsDangerous.java, compiled using "javac
JavaIsDangerous.java" and evaluated using "java JavaIsDangerous" it
returns "The final result is 10.0".

At its core this is a consequence of mandatory checked exceptions. If the
language designers admitted that floating point calculations such as
dividing zero by zero should throw an exception then they would have to
force programmers to catch exceptions for many floating point
calculations. So instead floating point arithmetic is allowed to propagate
non-numbers such as NaN ("Not a Number").

But if you admit that your real arithmetic is returning non-numbers then
you have to make up bizarre rules for real number predicates so that the
predicates also avoid throwing exceptions to be explicitly caught by
programmers. So you get nonsense appearing in the language specification
for "Numerical Comparison Operators <, <=, >, and >=" such as "If either
operand is NaN, then the result is false."

The intermediate calculation above is NaN and the so-called "Numerical
Comparison" of a non-number to the real number 3.0 is false, so 10.0 is
returned by the conditional expression above. The mistaken intermediate
result has been whitewashed by the language specification. In real code it
wouldn't be as obvious as this toy example.
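[Editor's note: the same unordered-comparison rule shows up in Python, which
the OP mentioned, since both follow IEEE 754 here. A quick sketch; note that
Python's float division by zero raises ZeroDivisionError rather than
producing NaN silently, so NaN is constructed explicitly:]

```python
# Java gets NaN from 0.0/0.0; Python raises ZeroDivisionError for that,
# so spell the NaN out directly:
nan = float("nan")

# Under IEEE 754, every ordered comparison involving NaN is false:
print(nan < 3.0)     # False
print(nan >= 3.0)    # False
print(nan == nan)    # False: NaN is not even equal to itself

# So the conditional from the Java example above silently takes the
# "else" branch, whitewashing the bad intermediate result:
final_result = nan if nan < 3.0 else 10.0
print(final_result)  # 10.0
```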

"How Java's Floating-Point Hurts Everyone Everywhere"
<http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf>

[I touched upon point 3: "Infinities and NaNs unleashed without the
protection of floating-point traps and flags mandated by IEEE Standards
754/854 belie Java's claim to robustness."]

Regards,
Adam
From: Björn Lindberg
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <hcs1xemorsp.fsf@my.nada.kth.se>
················@ee.oulu.fi.invalid (Ossi Herrala) writes:

> Hello List 
> 
> And thanks to all of you that I'm now addicted of studying and learning
> Common Lisp. Great language! :)
> 
> Anyway, I encountered some strange behaviour:
> 
> In LispWorks:
> 
> > (- 0.9 0.5)
> 0.4
> 
> But in CLisp and CMUCL:
> 
> > (- 0.9 0.5)
> 0.39999998
> 
> (On a side note, this happens in Python also:
> 
> >>> 0.9 - 0.5
> 0.40000000000000002
> )
> 
> I once heard a (good) explanation why this happens with floating-point
> numbers, but I feel this is really wrong.
> 
> Is there anything that CL implementations (Clisp and CMUCL) should do
> differently? Or should I do something (some kind of work around?) to
> get correct and accurate results?
> 
> For example this is ugly work around:
> 
> > (/ (- (* 0.9 100) (* 0.5 100)) 100)
> 0.4
> 
> But it works in CLisp, CMUCL and LispWorks. Not that I accept such
> hacks ;)

If you want exact results, you should use exact
arithmetic. Fortunately, CL provides just this:

(- 9/10 5/10)
==> 2/5

And this is not a work-around.
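[Editor's note: the same exact arithmetic is available outside Lisp too; for
comparison with the OP's Python example, the standard-library fractions
module gives the analogue of CL's ratios:]

```python
from fractions import Fraction

# Exact rational arithmetic, the analogue of CL's (- 9/10 5/10) => 2/5:
result = Fraction(9, 10) - Fraction(5, 10)
print(result)         # 2/5
print(float(result))  # 0.4 -- only this final conversion is inexact
```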


Björn
From: Jeff M.
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <1101137499.508398.296640@c13g2000cwb.googlegroups.com>
Along the same lines, there is a nice function: RATIONALIZE that you
can use to get an exact value from your (inexact) floating-point value:
(- (rationalize 0.9) (rationalize 0.5))
 => 2/5

Jeff M.
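[Editor's note: Python has no built-in RATIONALIZE, but a rough analogue (not
the same algorithm) can be sketched with Fraction: constructing a Fraction
from a float recovers the float's exact binary value, like CL's RATIONAL,
while limit_denominator finds the simplest nearby rational:]

```python
from fractions import Fraction

# Fraction(float) recovers the float's exact binary value (CL's RATIONAL):
print(Fraction(0.9))  # 8106479329266893/9007199254740992

# limit_denominator finds the simplest nearby rational, roughly
# playing the role of CL's RATIONALIZE:
print(Fraction(0.9).limit_denominator())  # 9/10

diff = Fraction(0.9).limit_denominator() - Fraction(0.5).limit_denominator()
print(diff)  # 2/5
```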
From: David Steuber
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <87u0rhbdjs.fsf@david-steuber.com>
"Jeff M." <·······@gmail.com> writes:

> Along the same lines, there is a nice function: RATIONALIZE that you
> can use to get an exact value from your (inexact) floating-point value:
> (- (rationalize 0.9) (rationalize 0.5))
>  => 2/5

This is nice and all, but there are a lot of irrational people out
there.

-- 
An ideal world is left as an excercise to the reader.
   --- Paul Graham, On Lisp 8.1
From: Sampo Smolander
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <co30h7$67t$1@oravannahka.helsinki.fi>
Ossi Herrala <················@ee.oulu.fi.invalid> wrote:
> In LispWorks:
> > (- 0.9 0.5)
> 0.4

> But in CLisp and CMUCL:
> > (- 0.9 0.5)
> 0.39999998

> (On a side note, this happens in Python also:
> >>> 0.9 - 0.5
> 0.40000000000000002
> )

LispWorks seems to have some kind of "pretty printer" active by
default(?). I mean, it seems to do some rounding before it prints the
number on the screen. CLisp and CMUCL seem to use normal single-floats,
as they should, since you did not ask for double precision. Python
seems to use doubles by default.

All in all, this is normal floating-point behaviour. (LispWorks also
thinks the answer is 0.39999998; it just rounds it to 0.4 when printing
it on the screen.)

This happens in every programming language.
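[Editor's note: the single- vs double-precision difference can be reproduced
even in Python, which only has doubles, by squeezing values through IEEE
single precision with the struct module. A sketch, assuming the usual IEEE
754 binary32/binary64 formats:]

```python
import struct

def to_single(x):
    """Round a Python float (a double) to IEEE single precision and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Single precision, as in CLisp/CMUCL's default floats: the subtraction
# yields the single-float that CMUCL prints as 0.39999998.
single_diff = to_single(to_single(0.9) - to_single(0.5))
print(single_diff == to_single(0.39999998))  # True

# In double precision the same subtraction happens to land exactly on
# the double denoted by the literal 0.4:
print(0.9 - 0.5 == 0.4)  # True
```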

Btw, if you want to ask more about floating point, you can also do it
in the Finnish newsgroup sfnet.atk.ohjelmointi.alkeet. Of course here,
too.
From: Geoffrey Summerhayes
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <Zippd.59245$Le1.1231367@news20.bellglobal.com>
"Sampo Smolander" <·························@helsinki.fi> wrote in message 
·················@oravannahka.helsinki.fi...
> Ossi Herrala <················@ee.oulu.fi.invalid> wrote:
>> In LispWorks:
>> > (- 0.9 0.5)
>> 0.4
>
>> But in CLisp and CMUCL:
>> > (- 0.9 0.5)
>> 0.39999998
>
>> (On a side note, this happens in Python also:
>> >>> 0.9 - 0.5
>> 0.40000000000000002
>> )
>
> LispWorks seems to have some kind of "pretty printer" active on
> default(?). I mean, it seems to do some rounding before it prints the
> number on the screen. CLisp and CMUCL seem to use normal floats, as they
> should since you did not ask for double precision. Python seems to use
> doubles by default.

IMO it would be a bad thing if LW actually did that; I'd prefer that
(= (- 0.9 0.5) (read-from-string (write-to-string (- 0.9 0.5))))

returned T.

In fact the reason LW prints 0.4 is that it uses doubles like
Python, just has a better printer ;)

http://www.lispworks.com/reference/lw43/LWRM/html/lwref-49.htm#53455

--
Geoff 
From: Sampo Smolander
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <co6naa$nv4$1@oravannahka.helsinki.fi>
Geoffrey Summerhayes <·············@hotmail.com> wrote:
> IMO it would be a bad thing if LW actually did that, I'd prefer
> (= (- 0.9 0.5) (read-from-string (write-to-string (- 0.9 0.5))))
> returned T.

> In fact the reason LW prints 0.4 is that it uses doubles like
> Python, just has a better printer ;)
> http://www.lispworks.com/reference/lw43/LWRM/html/lwref-49.htm#53455

So do you mean LW prints 0.40000000000000002 as 0.4 because it
truncates, or because it rounds?
From: Geoffrey Summerhayes
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <9MKpd.60086$Ro.2252854@news20.bellglobal.com>
"Sampo Smolander" <·························@helsinki.fi> wrote in message 
·················@oravannahka.helsinki.fi...
> Geoffrey Summerhayes <·············@hotmail.com> wrote:
>> IMO it would be a bad thing if LW actually did that, I'd prefer
>> (= (- 0.9 0.5) (read-from-string (write-to-string (- 0.9 0.5))))
>> returned T.
>
>> In fact the reason LW prints 0.4 is that it uses doubles like
>> Python, just has a better printer ;)
>> http://www.lispworks.com/reference/lw43/LWRM/html/lwref-49.htm#53455
>
> So do you mean LW prints 0.40000000000000002 as 0.4 because it
> truncates, or because it rounds?

Well the general term used in the literature is 'round', but it's
very misleading. What do you call it when Python prints 0.4
as 0.40000000000000002?

Each double-float represents an infinite number of numbers, in
this case, a range roughly from 0.40000000000000004 to
0.39999999999999997

Looking at the bits:
CL-USER 14 > (multiple-value-bind (significand exponent)
                 (integer-decode-float 0.4)
               (format nil "s:~B e:~B" significand exponent))
"s:11001100110011001100110011001100110011001100110011010 e:-110110"

CL-USER 15 > (multiple-value-bind (significand exponent)
                 (integer-decode-float 0.40000000000000002)
               (format nil "s:~B e:~B" significand exponent))
"s:11001100110011001100110011001100110011001100110011010 e:-110110"

Most, if not all, Lisps use an algorithm that prints a
number with the shortest print representation available in
the range. Ideally any number printed should produce the
same fp representation when read back in.

Unfortunately, both LW 4.3.6(PE) and Corman 2.5(eval) failed on this:

(let ((x 0.39999999999999995d0))
   (= x (read-from-string (write-to-string x))))

Corman appears to be using a single float for this.
Allegro 6.2(trial) and Clisp 2.31 both gave a result of T.

--
Geoff
From: Joseph Oswald
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <e86bd0ec.0412011122.4f95b058@posting.google.com>
"Geoffrey Summerhayes" <·············@hotmail.com> wrote in message news:<······················@news20.bellglobal.com>...

> Each double-float represents an infinite number of numbers, in
> this case, a range roughly from 0.40000000000000004 to
> 0.39999999999999997
> 

Actually, this is by no means obvious, unless given a specific
context.

The HyperSpec for Common Lisp, in fact, defines a float as

"A float is a mathematical rational (but not a Common Lisp rational)
of the form s*f*b^(e-p), where s is +1 or -1, the sign; b is an integer
greater than 1, the base or radix of the representation; p is a
positive integer, the precision (in base-b digits) of the float; f is
a positive integer between b^(p-1) and b^p-1 (inclusive), the
significand; and e is an integer, the exponent."

This is an exact value. It does not represent a range of values. Of
course, there is a range of reals which will be translated to this
exact value, but that is because your text strings have more digits of
precision than the floating point representation can handle.

I am not a numerical analyst, but I understand there are sound reasons
for favoring this paradigm over your paradigm using intervals. Once
you do operations on intervals, the relative intervals grow or shrink;
this will almost certainly break your concept unless your
representation explicitly carries the right interval around with the
number. The simplistic use of the nearest neighbors in the
representation will not work at all. Even if you decide to compute
everything with intervals, you will quickly run into a bunch of hairy
problems involving branch cuts and so forth, where pretty soon you
won't even know the sign of the number. At which point it's a little
silly trying to faithfully preserve the original ignorance.
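[Editor's note: that a float denotes one exact rational is easy to check
concretely; for instance, in Python:]

```python
from fractions import Fraction

# The double obtained from the literal 0.4 IS one exact rational --
# it just isn't 2/5:
print(Fraction(0.4))  # 3602879701896397/9007199254740992
assert Fraction(0.4) == Fraction(3602879701896397, 2**53)
assert Fraction(0.4) != Fraction(2, 5)
```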
From: Geoffrey Summerhayes
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <qPJrd.20825$kI6.1306139@news20.bellglobal.com>
"Joseph Oswald" <··············@hotmail.com> wrote in message 
·································@posting.google.com...
> "Geoffrey Summerhayes" <·············@hotmail.com> wrote in message 
> news:<······················@news20.bellglobal.com>...
>
>> Each double-float represents an infinite number of numbers, in
>> this case, a range roughly from 0.40000000000000004 to
>> 0.39999999999999997
>>
>
> Actually, this is by no means obvious, unless given a specific
> context.
>
> The HyperSpec for Common Lisp, in fact, defines a float as
>
> "A float is a mathematical rational (but not a Common Lisp rational)
> of the form s*f*b^(e-p), where s is +1 or -1, the sign; b is an integer
> greater than 1, the base or radix of the representation; p is a
> positive integer, the precision (in base-b digits) of the float; f is
> a positive integer between b^(p-1) and b^p-1 (inclusive), the
> significand; and e is an integer, the exponent."
>
> This is an exact value. It does not represent a range of values. Of
> course, there is a range of reals which will be translated to this
> exact value, but that is because your text strings have more digits of
> precision than the floating point representation can handle.
>
> I am not a numerical analyst, but I understand there are sound reasons
> for favoring this paradigm over your paradigm using intervals. Once
> you do operations on intervals, the relative intervals grow or shrink;
> this will almost certainly break your concept unless your
> representation explicitly carries the right interval around with the
> number. The simplistic use of the nearest neighbors in the
> representation will not work at all. Even if you decide to compute
> everything with intervals, you will quickly run into a bunch of hairy
> problems involving branch cuts and so forth, where pretty soon you
> won't even know the sign of the number. At which point it's a little
> silly trying to faithfully preserve the original ignorance.

Most of us do not enter floating-point numbers in their exact binary
form but in decimal instead. You seem to imply that as long as you
stick to numbers like 0.3, the number you get internally will be their
exact value, but that's not what happens.

Even if you do get the exact value, you're set only as long as you
don't perform any operations on the number, or are very careful about
which operations you do perform. Otherwise you run into

http://www.lispworks.com/reference/HyperSpec/Body/12_adb.htm

For the most part it isn't too bad; the error never becomes large
enough to make a difference. OTOH, it's the reason we do have courses
in numerical analysis: sometimes it gets incredibly bad, orders of
magnitude bad.

And then there's chaos theory where equations are designed to be
highly sensitive to initial conditions.

--
Geoff
From: Christophe Rhodes
Subject: Re: Floating-point arithmetic in CL
Date: 
Message-ID: <sqekifwoe1.fsf@cam.ac.uk>
Sampo Smolander <·························@helsinki.fi> writes:

> Geoffrey Summerhayes <·············@hotmail.com> wrote:
>> IMO it would be a bad thing if LW actually did that, I'd prefer
>> (= (- 0.9 0.5) (read-from-string (write-to-string (- 0.9 0.5))))
>> returned T.
>
>> In fact the reason LW prints 0.4 is that it uses doubles like
>> Python, just has a better printer ;)
>> http://www.lispworks.com/reference/lw43/LWRM/html/lwref-49.htm#53455
>
> So do you mean LW prints 0.40000000000000002 as 0.4 because it
> truncates, or because it rounds?

Presumably it uses one of the well-known[*] algorithms for printing
floating point numbers accurately -- printing no more digits than need
be printed to identify a given floating point number unambiguously.

Christophe

[*] Actually, experience tells me that these algorithms are not well
known.  Both the MPEG-7 standard for encoding audio metadata and the
PHP language display shocking ignorance of the ability to perform
well-defined manipulations on floating point numbers.

@inproceedings{burger96printing,
    author = {Burger, Robert G. and Dybvig, R. Kent},
    title = {Printing Floating-Point Numbers Quickly and Accurately},
    booktitle = {{SIGPLAN} Conference on Programming Language Design and Implementation},
    pages = {108--116},
    year = {1996} }

@inproceedings{dragon,
    author = {Steele, Jr., Guy L. and White, Jon L.},
    title = {How to Print Floating-Point Numbers Accurately},
    booktitle = {{SIGPLAN} Conference on Programming Language Design and Implementation},
    pages = {112--126},
    year = {1990} }

@inproceedings{clinger,
    author = {Clinger, William D.},
    title = {How to Read Floating Point Numbers Accurately},
    booktitle = {{SIGPLAN} Conference on Programming Language Design and Implementation},
    pages = {92--101},
    year = {1990} }