From: Hidayet Tunc Simsek
Subject: arithmetic
Date: 
Message-ID: <38163A52.343A6D13@EECS.Berkeley.Edu>
Hi, here is my problem:

* (values  (lisp-implementation-type)  (lisp-implementation-version)
(machine-type))
 
"CMU Common Lisp"
"18b"
"SPARCstation"
* (+ 0.6 0.1)
 
0.70000005


Where does the .00000005 come from?  This accumulates into an undesired
result.
Is there a way to avoid this?

Thanks,
Tunc

From: Chuck Fry
Subject: Re: arithmetic
Date: 
Message-ID: <38164599$0$223@nntp1.ba.best.com>
In article <·················@EECS.Berkeley.Edu>,
Hidayet Tunc Simsek  <······@EECS.Berkeley.Edu> wrote:
>* (+ 0.6 0.1)
> 
>0.70000005
>
>
>Where does the .00000005 come from?  This accumulates into an undesired
>result.

JonL White presented a paper which mentioned this very topic at the most
recent Lisp Users Group meeting.  In a nutshell, it is the unavoidable
result of attempting to express decimal fractions as binary fractions.
The internal representations of the original numbers aren't exactly
equal to the exact decimal fractions, nor can these numbers be exactly
represented in a finite number of bits as a binary fraction.

The printer may also be adding to the confusion by not accurately
representing the result of the addition.  This was also mentioned in
JonL's presentation.

The addition itself is accurate.

For what it's worth, I get the same exact result in Allegro CL 5.0 on my
Sparc.

>Is there a way to avoid this?

Yes.  Express all your floating point numbers in base 2. :-(

 -- Chuck
--
	    Chuck Fry -- Jack of all trades, master of none
 ······@chucko.com (text only please)  ········@home.com (MIME enabled)
Lisp bigot, car nut, photographer, sometime guitarist and mountain biker
The addresses above are real.  All spammers will be reported to their ISPs.
From: H. Tunc Simsek
Subject: Re: arithmetic
Date: 
Message-ID: <3816918D.881E9833@EECS.Berkeley.Edu>
Chuck Fry wrote:
> 
> In article <·················@EECS.Berkeley.Edu>,
> Hidayet Tunc Simsek  <······@EECS.Berkeley.Edu> wrote:
> >* (+ 0.6 0.1)
> >
> >0.70000005
> >
> >
> >Where does the .00000005 come from?  This accumulates into an undesired
> >result.
> 
> JonL White presented a paper which mentioned this very topic at the most
> recent Lisp Users Group meeting.  In a nutshell, it is the unavoidable
> result of attempting to express decimal fractions as binary fractions.
> The internal representations of the original numbers aren't exactly
> equal to the exact decimal fractions, nor can these numbers be exactly
> represented in a finite number of bits as a binary fraction.
> 
> The printer may also be adding to the confusion by not accurately
> representing the result of the addition.  This was also mentioned in
> JonL's presentation.
> 
> The addition itself is accurate.
> 

I don't think the addition is accurate:

* (+ 0.6 0.1)

0.70000005
* (= 0.7 (+ 0.6 0.1))

NIL



> For what it's worth, I get the same exact result in Allegro CL 5.0 on my
> Sparc.


My Windows Allegro doesn't suffer this problem, at least for this
particular example:


> (= 0.7 (+ 0.6 0.1))
T
> (lisp-implementation-type)
"Allegro CL for Windows"
> (lisp-implementation-version)
"3.0.2 Release: 15-Jan-97 18:56"
> 
> 
> >Is there a way to avoid this?
> 
> Yes.  Express all your floating point numbers in base 2. :-(
> 


The entire idea that there is not much one can do about this just
doesn't make sense.  I've used C and Matlab over the years and such a
problem never caught my attention, at least when dealing with an
accuracy of 0.1.

Tunc
From: Chuck Fry
Subject: Re: arithmetic
Date: 
Message-ID: <38169d69$0$216@nntp1.ba.best.com>
In article <·················@EECS.Berkeley.Edu>,
H. Tunc Simsek <······@EECS.Berkeley.Edu> wrote:
>Chuck Fry wrote:
>> In article <·················@EECS.Berkeley.Edu>,
>> Hidayet Tunc Simsek  <······@EECS.Berkeley.Edu> wrote:
>> >* (+ 0.6 0.1)
>> >
>> >0.70000005
>> >
>> >
>> >Where does the .00000005 come from?  This accumulates into an undesired
>> >result.
>> 
>> JonL White presented a paper which mentioned this very topic at the most
>> recent Lisp Users Group meeting.  In a nutshell, it is the unavoidable
>> result of attempting to express decimal fractions as binary fractions.
>> The internal representations of the original numbers aren't exactly
>> equal to the exact decimal fractions, nor can these numbers be exactly
>> represented in a finite number of bits as a binary fraction.
>> 
>> The printer may also be adding to the confusion by not accurately
>> representing the result of the addition.  This was also mentioned in
>> JonL's presentation.
>> 
>> The addition itself is accurate.
>> 
>
>I don't think the addition is accurate:
>
>* (= 0.7 (+ 0.6 0.1))
>
>NIL

The primary problem is that the *operands* are, unavoidably,
inaccurately represented as binary floating point numbers.  And yes,
that includes the "0.7" to which you compare the result.

I will admit you are right about the addition; there is likely some
rounding error in the addition, because of the disparity in magnitudes
between the operands, and the need to represent the result in a fixed
width format.  But the major source of the error is the fact that none
of these numbers can be exactly represented as a binary fraction.

>> For what it's worth, I get the same exact result in Allegro CL 5.0 on my
>> Sparc.
>
>My Windows Allegro doesn't suffer this problem, at least for this
>particular 
>example:
>
>
>> (= 0.7 (+ 0.6 0.1))
>T
>> (lisp-implementation-type)
>"Allegro CL for Windows"
>> (lisp-implementation-version)
>"3.0.2 Release: 15-Jan-97 18:56"

That is a completely different code base from ACL 5.0x, and it may use a
different floating point representation.  What is the value of the
parameter *READ-DEFAULT-FLOAT-FORMAT* in this environment?

>The entire idea that there is not much one can do about this just
>doesn't make sense.

It makes sense once you realize that 0.1, 0.6, and 0.7 in decimal are
all infinitely repeating fractions when accurately converted to binary
representations, and that there is bound to be some rounding error when
those fractions are represented as 24 bit mantissas in IEEE single
precision format.  Do the conversion by hand and the problem will become
obvious.

>I've used C and Matlab over the years and such a problem never caught my
>attention,
>at least when dealing with an accuracy of 0.1.

For all I know Matlab uses rational numbers instead of floats, which
representation completely avoids the problem.  (This is why Common Lisp
has rationals, BTW, as Macsyma needed them.)

As for C, you either didn't look closely, or were using double floats
(which reduce the magnitude of the problem but don't eliminate it), or
you weren't printing the results to their full precision.  This problem
is unavoidable in any language.

 -- Chuck
--
	    Chuck Fry -- Jack of all trades, master of none
 ······@chucko.com (text only please)  ········@home.com (MIME enabled)
Lisp bigot, car nut, photographer, sometime guitarist and mountain biker
The addresses above are real.  All spammers will be reported to their ISPs.
From: Marco Antoniotti
Subject: Re: arithmetic
Date: 
Message-ID: <lwogdlqmq4.fsf@parades.rm.cnr.it>
It turns out that CMUCL 18b (on an Ultra) has
*READ-DEFAULT-FLOAT-FORMAT* set to SINGLE-FLOAT.  Setting it to
DOUBLE-FLOAT makes things work (in this case :) ).  Even without the
setting of *READ-DEFAULT-FLOAT-FORMAT*, you get (in CMUCL 18b on
Solaris)

* (= 0.7d0 (+ 0.1d0 0.6d0))
 
T

I.e. by forcing the reader to create DOUBLE-FLOATs for the
constants.

Also note the following example (where I made sure that the types are
what I need)

File testFloat.c
==============================================================================
#include <stdio.h>

void main()
{
  int b;
  float point_seven = 0.7;
  float point_one   = 0.1;
  float point_six   = 0.6;

  b = point_seven == point_one + point_six;
  printf("... and the value is %d\n", b);
}
==============================================================================
·······@copernico:C 53> gcc -O testFloat.c 
·······@copernico:C 54> a.out
... and the value is 0
==============================================================================

... as expected.

Cheers

PS. Just for Tunc: this is the reason why in Shift
(http://www.path.berkeley.edu/shift) 'number' is really
a 'double'.

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Christopher R. Barry
Subject: Re: arithmetic
Date: 
Message-ID: <87d7u0psf3.fsf@2xtreme.net>
Marco Antoniotti <·······@parades.rm.cnr.it> writes:

> void main()
  ^^^^^^^^^

Shame on you!!!  :-)

Christopher
From: H. Tunc Simsek
Subject: Re: arithmetic
Date: 
Message-ID: <3816AC77.8CCE2ECA@EECS.Berkeley.Edu>
How would you ensure that a function's arguments are read as doubles?
Or better yet, suppose SEQ is my function that generates the sequence
from init to final by step:

> (seq 0 0.1 0.5)

 '(0 0.1 0.2 0.3 0.4 0.5)

I would also like to be able to say

> (seq 0 1/10 1/2)
  
or use integer arguments.  In all cases it'd be nice to have the
generated sequence be of the same type as the arguments (or the type
suggested by them).
Is there a really short way to do this or would we need to dispatch on 
the argument type?

Tunc

Marco Antoniotti wrote:
> 
> It turns out that CMUCL 18b (on an Ultra) has
> *READ-DEFAULT-FLOAT-FORMAT* set to SINGLE-FLOAT.  Setting it to
> DOUBLE-FLOAT makes things work (in this case :) ).  Even without the
> setting of *READ-DEFAULT-FLOAT-FORMAT*, you get (in CMUCL 18b on
> Solaris)
> 
> * (= 0.7d0 (+ 0.1d0 0.6d0))
> 
> T
> 
> I.e. by forcing the reader to create DOUBLE-FLOATs for the
> constants.
> 
> Also note the following example (where I made sure that the types are
> what I need)
> 
> File testFloat.c
> ==============================================================================
> #include <stdio.h>
> 
> void main()
> {
>   int b;
>   float point_seven = 0.7;
>   float point_one   = 0.1;
>   float point_six   = 0.6;
> 
>   b = point_seven == point_one + point_six;
>   printf("... and the value is %d\n", b);
> }
> ==============================================================================
> ·······@copernico:C 53> gcc -O testFloat.c
> ·······@copernico:C 54> a.out
> ... and the value is 0
> ==============================================================================
> 
> ... as expected.
> 
> Cheers
> 
> PS. Just for Tunc: this is the reason why in Shift
> (http://www.path.berkeley.edu/shift) 'number' is really
> a 'double'.
From: Christopher R. Barry
Subject: Re: arithmetic
Date: 
Message-ID: <873duwprji.fsf@2xtreme.net>
"H. Tunc Simsek" <······@EECS.Berkeley.Edu> writes:

> How would you ensure that a function ensures that its arguments are
> read as double, or even better if SEQ is my function that 
> generates the sequence from init to final by step:
> 
> > (seq 0 0.1 0.5)
> 
>  '(0 0.1 0.2 0.3 0.4 0.5)
> 
> I would also like to be able to say
> 
> > (seq 0 1/10 1/2)
>   
> or use integer arguments.  In all cases it'd be nice to have the
> generated
> sequence be of the same type as the arguments (or the type suggested by
> them).
> Is there a really short way to do this or would we need to dispatch on 
> the argument type?

(defun seq (from by to)
  (loop with *read-default-float-format* = 'double-float
        for i from from to to by by
        collect i))

Christopher
From: Barry Margolin
Subject: Re: arithmetic
Date: 
Message-ID: <oIJR3.84$uC6.3571@burlma1-snr2>
In article <··············@2xtreme.net>,
Christopher R. Barry <······@2xtreme.net> wrote:
>(defun seq (from by to)
>  (loop with *read-default-float-format* = 'double-float
>        for i from from to to by by
>        collect i))

Did you really mean this seriously (I didn't see a smiley)?
*read-default-float-format* only affects I/O.  By the time this function is
run, the numbers will have been read in already.  And when the function's
return value is printed by the read-eval-print loop, the variable binding
will have been restored.

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Christopher R. Barry
Subject: Re: arithmetic
Date: 
Message-ID: <87ogdko1p5.fsf@2xtreme.net>
Barry Margolin <······@bbnplanet.com> writes:

> In article <··············@2xtreme.net>,
> Christopher R. Barry <······@2xtreme.net> wrote:
> >(defun seq (from by to)
> >  (loop with *read-default-float-format* = 'double-float
> >        for i from from to to by by
> >        collect i))
> 
> Did you really mean this seriously (I didn't see a smiley)?
> *read-default-float-format* only affects I/O.  By the time this function is
> run, the numbers will have been read in already.  And when the function's
> return value is printed by the read-eval-print loop, the variable binding
> will have been restored.

I didn't really test it or think about it. The binding doesn't matter
when printing the result. It's only important that it is in effect
when you first read the floats because during the computation you
need:

  USER(9): (= (+ 0.1 0.6) 0.7)
  NIL

  USER(10): (= (+ 0.1d0 0.6d0) 0.7d0)
  T

so that:

  (seq 0 0.1 0.7) => (0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7)

instead of:

  (seq 0 0.1 0.7) => (0.0 0.1 0.2 0.3 0.4 0.5 0.6)


Even if the floats were read as doubles, I didn't do it right anyway,
because adding 0.1 six times would accumulate enough round-off error
that the sequence would probably only contain 0.0-0.6.  Or even if
that worked, for a large enough sequence it would fail.  You get what
you pay for around here.

Anyway, you would have to round at each step, or do a #'< instead of
#'= comparison like Howard was talking about.

Christopher
From: Barry Margolin
Subject: Re: arithmetic
Date: 
Message-ID: <wCLR3.92$uC6.3818@burlma1-snr2>
In article <··············@2xtreme.net>,
Christopher R. Barry <······@2xtreme.net> wrote:
>Barry Margolin <······@bbnplanet.com> writes:
>
>> In article <··············@2xtreme.net>,
>> Christopher R. Barry <······@2xtreme.net> wrote:
>> >(defun seq (from by to)
>> >  (loop with *read-default-float-format* = 'double-float
>> >        for i from from to to by by
>> >        collect i))
>> 
>> Did you really mean this seriously (I didn't see a smiley)?
>> *read-default-float-format* only affects I/O.  By the time this function is
>> run, the numbers will have been read in already.  And when the function's
>> return value is printed by the read-eval-print loop, the variable binding
>> will have been restored.
>
>I didn't really test it or think about it. The binding doesn't matter
>when printing the result. It's only important that it is in effect
>when you first read the floats because during the computation you
>need:

But it *isn't* in effect when you first read the floats.  The floats are
read when you type (seq 0 0.1 0.7), and the variable isn't bound until this
is evaluated.

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Christopher R. Barry
Subject: Re: arithmetic
Date: 
Message-ID: <87puxzmn5d.fsf@2xtreme.net>
Barry Margolin <······@bbnplanet.com> writes:

> In article <··············@2xtreme.net>,
> Christopher R. Barry <······@2xtreme.net> wrote:
> >Barry Margolin <······@bbnplanet.com> writes:
> >
> >> In article <··············@2xtreme.net>,
> >> Christopher R. Barry <······@2xtreme.net> wrote:
> >> >(defun seq (from by to)
> >> >  (loop with *read-default-float-format* = 'double-float
> >> >        for i from from to to by by
> >> >        collect i))
> >> 
> >> Did you really mean this seriously (I didn't see a smiley)?
> >> *read-default-float-format* only affects I/O.  By the time this function is
> >> run, the numbers will have been read in already.  And when the function's
> >> return value is printed by the read-eval-print loop, the variable binding
> >> will have been restored.
> >
> >I didn't really test it or think about it. The binding doesn't matter
> >when printing the result. It's only important that it is in effect
> >when you first read the floats because during the computation you
> >need:
> 
> But it *isn't* in effect when you first read the floats.  The floats are
> read when you type (seq 0 0.1 0.7), and the variable isn't bound until this
> is evaluated.

Yes, I thought I was clear about that in my reply:

  > Even if the floats were read as doubles, I didn't do it right anyways,
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Christopher
From: Marco Antoniotti
Subject: Re: arithmetic
Date: 
Message-ID: <lw7lk753hi.fsf@parades.rm.cnr.it>
······@2xtreme.net (Christopher R. Barry) writes:


> (defun seq (from by to)
>   (loop with *read-default-float-format* = 'double-float
>         for i from from to to by by
>         collect i))

Sorry, :) but I do not see any READ in your code :)

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Tim Bradshaw
Subject: Re: arithmetic
Date: 
Message-ID: <ey3so2x2p8h.fsf@lostwithiel.tfeb.org>
* H Tunc Simsek wrote:
  
> or use integer arguments.  In all cases it'd be nice to have the
> generated
> sequence be of the same type as the arguments (or the type suggested by
> them).
> Is there a really short way to do this or would we need to dispatch on 
> the argument type?

This will just happen in CL.  This function pretty much does what you
want (it doesn't quite: in the case where the step does not divide the
interval exactly, which will often be the case for floats, you may
miss the last element; that is way too fiddly to get right in a news
article):

	(defun seq (lo step hi)
	  (loop for x from lo to hi by step
	        collect x))

--tim
From: Marco Antoniotti
Subject: Re: arithmetic
Date: 
Message-ID: <lwd7u10zp4.fsf@parades.rm.cnr.it>
Tim Bradshaw <···@tfeb.org> writes:

> * H Tunc Simsek wrote:
>   
> > or use integer arguments.  In all cases it'd be nice to have the
> > generated
> > sequence be of the same type as the arguments (or the type suggested by
> > them).
> > Is there a really short way to do this or would we need to dispatch on 
> > the argument type?
> 
> This will just happen in CL.  This function pretty much does what you
> want (it doesn't quite, in the case where the step does not divide the
> interval exactly, which will often be the case for floats, you may
> miss the last element, this is way too fiddly to get right in a news
> article):
> 
> 	(defun seq (lo step hi)
> 	  (loop for x from lo to hi by step
> 	        collect x))

I believe Tunc's problem is really a "reader" problem.
I.e. you must test the value of *READ-DEFAULT-FLOAT-FORMAT* to be sure
that your arithmetic works, when you have literal constants in your
code.

Of course you can always make sure that you get doubles where you need
doubles and single-floats where you need singles by writing 1.0d0 and
1.0f0.

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Fernando D. Mato Mira
Subject: Re: arithmetic
Date: 
Message-ID: <38177AF9.D1305B93@iname.com>
#include <stdio.h>

int main()
{
  int b;
  float point_seven = 0.7f;
  float point_one   = 0.1f;
  float point_six   = 0.6f;

  b = point_seven == point_one + point_six;

  printf("0.7: %E 0.1: %E 0.6: %E 0.1+0.6: %E\n", point_seven, point_one,
point_six, point_one + point_six);
  printf("... and the value is %d\n", b);

  return 0;
}

gentoo:matomira$ a.out
0.7: 7.000000E-01 0.1: 1.000000E-01 0.6: 6.000000E-01 0.1+0.6: 7.000000E-01
... and the value is 0

As expected?

--
Fernando D. Mato Mira
Real-Time SW Eng & Networking
Advanced Systems Engineering Division
CSEM
Jaquet-Droz 1                   email: matomira AT acm DOT org
CH-2007 Neuchatel                 tel:       +41 (32) 720-5157
Switzerland                       FAX:       +41 (32) 720-5720

www.csem.ch      www.vrai.com     ligwww.epfl.ch/matomira.html
From: Marco Antoniotti
Subject: Re: arithmetic
Date: 
Message-ID: <lw3duv537p.fsf@parades.rm.cnr.it>
"Fernando D. Mato Mira" <········@iname.com> writes:

> #include <stdio.h>
> 
> int main()
> {
>   int b;
>   float point_seven = 0.7f;
>   float point_one   = 0.1f;
>   float point_six   = 0.6f;
> 
>   b = point_seven == point_one + point_six;
> 
>   printf("0.7: %E 0.1: %E 0.6: %E 0.1+0.6: %E\n", point_seven, point_one,
> point_six, point_one + point_six);
>   printf("... and the value is %d\n", b);
> 
>   return 0;
> }
> 
> gentoo:matomira$ a.out
> 0.7: 7.000000E-01 0.1: 1.000000E-01 0.6: 6.000000E-01 0.1+0.6: 7.000000E-01
> ... and the value is 0
> 
> As expected?

"As expected", in the sense that C and CL behave in the same way.  You
may say that the CL printer (at least CMUCL) is "more faithful" to the
SINGLE-FLOAT internal representation.

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Fernando D. Mato Mira
Subject: Re: arithmetic
Date: 
Message-ID: <38183907.E64A919C@iname.com>
Marco Antoniotti wrote:

> "As expected", in the sense that C and CL behave in the same way.  You
> may say that the CL printer (at least CMUCL) is "more faithful" to the
> SINGLE-FLOAT internal representation.

Rhetorical question. (What does this say about numerical C code debugging?)

--
((( DANGER )) LISP BIGOT (( DANGER )) LISP BIGOT (( DANGER )))

Fernando D. Mato Mira
Real-Time SW Eng & Networking
Advanced Systems Engineering Division
CSEM
Jaquet-Droz 1                   email: matomira AT acm DOT org
CH-2007 Neuchatel                 tel:       +41 (32) 720-5157
Switzerland                       FAX:       +41 (32) 720-5720

www.csem.ch      www.vrai.com     ligwww.epfl.ch/matomira.html
From: H. Tunc Simsek
Subject: Re: arithmetic
Date: 
Message-ID: <3816A8B6.6BC76ED1@EECS.Berkeley.Edu>
Chuck Fry wrote:
> 
> In article <·················@EECS.Berkeley.Edu>,
> H. Tunc Simsek <······@EECS.Berkeley.Edu> wrote:
> >Chuck Fry wrote:
> >> In article <·················@EECS.Berkeley.Edu>,
> >> Hidayet Tunc Simsek  <······@EECS.Berkeley.Edu> wrote:
> >> >* (+ 0.6 0.1)
> >> >
> >> >0.70000005
> >> >
> >> >
> >> >Where does the .00000005 come from?  This accumulates into an undesired
> >> >result.
> >>
> >> JonL White presented a paper which mentioned this very topic at the most
> >> recent Lisp Users Group meeting.  In a nutshell, it is the unavoidable
> >> result of attempting to express decimal fractions as binary fractions.
> >> The internal representations of the original numbers aren't exactly
> >> equal to the exact decimal fractions, nor can these numbers be exactly
> >> represented in a finite number of bits as a binary fraction.
> >>
> >> The printer may also be adding to the confusion by not accurately
> >> representing the result of the addition.  This was also mentioned in
> >> JonL's presentation.
> >>
> >> The addition itself is accurate.
> >>
> >
> >I don't think the addition is accurate:
> >
> >* (= 0.7 (+ 0.6 0.1))
> >
> >NIL
> 
> The primary problem is that the *operands* are, unavoidably,
> inaccurately represented as binary floating point numbers.  And yes,
> that includes the "0.7" to which you compare the result.
> 
> I will admit you are right about the addition; there is likely some
> rounding error in the addition, because of the disparity in magnitudes
> between the operands, and the need to represent the result in a fixed
> width format.  But the major source of the error is the fact that none
> of these numbers can be exactly represented as a binary fraction.
> 
> >> For what it's worth, I get the same exact result in Allegro CL 5.0 on my
> >> Sparc.
> >
> >My Windows Allegro doesn't suffer this problem, at least for this
> >particular
> >example:
> >
> >
> >> (= 0.7 (+ 0.6 0.1))
> >T
> >> (lisp-implementation-type)
> >"Allegro CL for Windows"
> >> (lisp-implementation-version)
> >"3.0.2 Release: 15-Jan-97 18:56"
> 
> That is a completely different code base from ACL 5.0x, and it may use a
> different floating point representation.  What is the value of the
> parameter *READ-DEFAULT-FLOAT-FORMAT* in this environment?

	I have the following on Windows Allegro 3.01:

	> *read-default-float-format*

	SINGLE-FLOAT

> 
> >The entire idea that there is not much one can do about this just
> >doesn't make sense.
> 
> It makes sense once you realize that 0.1, 0.6, and 0.7 in decimal are
> all infinitely repeating fractions when accurately converted to binary
> representations, and that there is bound to be some rounding error when
> those fractions are represented as 24 bit mantissas in IEEE single
> precision format.  Do the conversion by hand and the problem will become
> obvious.
> 
> >I've used C and Matlab over the years and such a problem never caught my
> >attention,
> >at least when dealing with an accuracy of 0.1.
> 
> For all I know Matlab uses rational numbers instead of floats, which
> representation completely avoids the problem.  (This is why Common Lisp
> has rationals, BTW, as Macsyma needed them.)

That's interesting, I didn't know that.  I thought Matlab used doubles.


> 
> As for C, you either didn't look closely, or were using double floats
> (which reduce the magnitude of the problem but don't eliminate it), or
> you weren't printing the results to their full precision.  This problem
> is unavoidable in any language.

I was prob. using double most of the time.


In any case, I think that single floats are enough to represent 
arithmetic at a precision of 0.1.  I do understand what you're saying
though.


From: Jens Kilian
Subject: Re: arithmetic
Date: 
Message-ID: <sfn1t5qhj3.fsf@bstde026.bbn.hp.com>
"H. Tunc Simsek" <······@EECS.Berkeley.Edu> writes:
> In any case, I think that single floats are enough to represent 
> arithmetic at a precision of 0.1.  I do understand what you're saying
> though.

They are.  If you round properly.  Never trust an ASCII representation of a
float unless you understand how it was produced; never compare two floats
unless you have rounded both to the desired precision, using identical rules.

There is a paper floating around on the web which discusses this
topic: David Goldberg's "What Every Computer Scientist Should Know
About Floating-Point Arithmetic".

HTH,
	Jens.
-- 
··········@acm.org                 phone:+49-7031-14-7698 (HP TELNET 778-7698)
  http://www.bawue.de/~jjk/          fax:+49-7031-14-7351
PGP:       06 04 1C 35 7B DC 1F 26 As the air to a bird, or the sea to a fish,
0x555DA8B5 BB A2 F0 66 77 75 E1 08 so is contempt to the contemptible. [Blake]
From: Christopher Browne
Subject: Re: arithmetic
Date: 
Message-ID: <ftOR3.56986$y45.657109@news4.giganews.com>
On 27 Oct 1999 06:36:25 GMT, Chuck Fry <······@best.com> wrote:
>In article <·················@EECS.Berkeley.Edu>,
>H. Tunc Simsek <······@EECS.Berkeley.Edu> wrote:
>>I don't think the addition is accurate:
>>
>>* (= 0.7 (+ 0.6 0.1))
>>
>>NIL
>
>The primary problem is that the *operands* are, unavoidably,
>inaccurately represented as binary floating point numbers.  And yes,
>that includes the "0.7" to which you compare the result.


Expressed in binary, 0.7 decimal is equivalent to the repeating binary
fraction with (binary) digits after the binary point:
 (1 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1 ...)

(with repeating 0 0 1 1 sequences after that)

0.6 has, as binary digits representing the fraction, the sequence:
 (1 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1 ...)

Since only a limited number of binary digits are *kept,* there is
inherently some rounding off that takes place.

This may be fine if the fractions "finish off" (e.g. "start
repeating") identically; if they *don't* repeat quite the same,
subtracting them will result in the "last bit" being off a bit (and
there may be some fun puns there...) much to the chagrin of any
comparisons that expect exactness.

If any further inexactness comes in from other sources, the results
will Just Plain Suffer.

This is why you don't use FP if you are truly looking for exact
results; this is why it is arguable that anyone that plans to
seriously use results of non-trivial FP calculations probably needs to
have some equivalent to a university course in numerical analysis, as
the theoretical ways that addition/subtraction/multiplication/division
are expected to work on real numbers are only *approximately*
equivalent to what these operations do to FP values.
-- 
"Sigh.  I like to think it's just the Linux people who want to be on the
`leading edge' so bad they walk right off the precipice."  
-- Craig E. Groeschel
········@ntlug.org- <http://www.ntlug.org/~cbbrowne/langlisp.html>
From: Patrick A. O'Donnell
Subject: Re: arithmetic
Date: 
Message-ID: <vytso2vsa4s.fsf@ai.mit.edu>
······@best.com (Chuck Fry) writes:
> In article <·················@EECS.Berkeley.Edu>,
> H. Tunc Simsek <······@EECS.Berkeley.Edu> wrote:
> >I don't think the addition is accurate:
> >
> >* (= 0.7 (+ 0.6 0.1))
> >
> >NIL
> 
> The primary problem is that the *operands* are, unavoidably,
> inaccurately represented as binary floating point numbers.  And yes,
> that includes the "0.7" to which you compare the result.
> 
> I will admit you are right about the addition; there is likely some
> rounding error in the addition, because of the disparity in magnitudes
> between the operands, and the need to represent the result in a fixed
> width format.  But the major source of the error is the fact that none
> of these numbers can be exactly represented as a binary fraction.

Indeed.  integer-decode-float can be helpful to see what is going on.
As can be seen from the interaction below, both 0.6 and 0.1 must be
rounded up when converting to the single-float precision.  The errors
are (approximately) 2.98e-8 and 1.86e-9, respectively.  So, the
addition reported above is actually adding two numbers that are
slightly larger than expected.  The sum of the errors is more than 1/2
(3.17e-8) of the lowest bit in the sum's representation (5.96e-8), so
the result is rounded up, too.

		- Pat


Allegro CL Enterprise Edition 5.0.1 [SPARC] (6/29/99 16:47)
Copyright (C) 1985-1999, Franz Inc., Berkeley, CA, USA.  All Rights Reserved.
;; Optimization settings: safety 1, space 1, speed 1, debug 2.
;; For a complete description of all compiler switches given the
;; current optimization settings evaluate (EXPLAIN-COMPILER-SETTINGS).
USER(1): (integer-decode-float 0.6)
10066330
-24
1
USER(2): (write * :base 2)
100110011001100110011010
10066330
USER(3): (integer-decode-float 0.1)
13421773
-27
1
USER(4): (write * :base 2)
110011001100110011001101
13421773
USER(5): (write (integer-decode-float 0.6d0) :base 2)
10011001100110011001100110011001100110011001100110011
5404319552844595
USER(6): (+ (ash 10066330 3) 13421773)
93952413
USER(7): (write * :base 2)
101100110011001100110011101
93952413
USER(8): (write (integer-decode-float 0.7) :base 2)
101100110011001100110011
11744051
USER(9): (write (integer-decode-float (+ 0.6 0.1)) :base 2)
101100110011001100110100
11744052
USER(10): 


USER(14): (scale-float (float (- #b10000 #b01100) 0.0d0) -27)
2.980232238769531d-8
USER(15): (scale-float (float (- #b01000 #b00110) 0.0d0) -30)
1.862645149230957d-9
USER(16): (+ * **)
3.166496753692627d-8
USER(17): (- (+ 0.6 0.1) 0.7)
5.9604645e-8
USER(18): single-float-epsilon
1.1920929e-7
USER(19): (/ * 2)
5.9604645e-8
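Pat's decomposition can be reproduced on any IEEE implementation.  As an
illustrative sketch (stdlib-only Python rather than Lisp), struct rounds a
double through single precision, and Decimal recovers the exact stored value:

```python
import struct
from decimal import Decimal

def f32(x):
    """Round a Python double to IEEE single precision and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Exact values stored for the single-float literals -- both rounded up:
print(Decimal(f32(0.6)))   # 0.60000002384185791015625
print(Decimal(f32(0.1)))   # 0.100000001490116119384765625

# The two upward errors together push the sum up by one more ulp
# (2**-24, i.e. 5.9604645e-8 at this magnitude):
s = f32(f32(0.6) + f32(0.1))
print(Decimal(s))                # 0.7000000476837158203125
print(s == f32(0.7) + 2**-24)    # True: one ulp above single-float 0.7
```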
From: ···@cc.gatech.edu
Subject: Re: arithmetic
Date: 
Message-ID: <m3k8o53fou.fsf@chaos.resnet.gatech.edu>
······@best.com (Chuck Fry) writes:

> 
> >I've used C and Matlab over the years and such a problem never caught my
> >attention,
> >at least when dealing with an accuracy of 0.1.
> 
> For all I know Matlab uses rational numbers instead of floats, which
> representation completely avoids the problem.  (This is why Common Lisp
> has rationals, BTW, as Macsyma needed them.)



Matlab is big on number-crunching, so it probably uses regular binary
floating point.

However, it might round off the answer so that binary-induced oddities
don't show up.


Lex
From: Marius Vollmer
Subject: Re: arithmetic
Date: 
Message-ID: <877lk5ax2u.fsf@zagadka.ping.de>
···@cc.gatech.edu writes:

> Matlab is big on number-crunching, so it probably uses regular binary
> floating point.

Yes, it just uses the C "double" type.
From: Christopher R. Barry
Subject: Re: arithmetic
Date: 
Message-ID: <87k8o4u26x.fsf@2xtreme.net>
······@best.com (Chuck Fry) writes:

> different floating point representation.  What is the value of the
> parameter *READ-DEFAULT-FLOAT-FORMAT* in this environment?

ANSI CL requires that *READ-DEFAULT-FLOAT-FORMAT* always be
SINGLE-FLOAT in the standard readtable. Unless he or a program
modifies it explicitly, that's what it will be.

  Variable                     Value                               
  *package*                    The CL-USER package                 
  *print-array*                t                                   
  *print-base*                 10                                  
  *print-case*                 :upcase                             
  *print-circle*               nil                                 
  *print-escape*               t                                   
  *print-gensym*               t                                   
  *print-length*               nil                                 
  *print-level*                nil                                 
  *print-lines*                nil                                 
  *print-miser-width*          nil                                 
  *print-pprint-dispatch*      The standard pprint dispatch table  
  *print-pretty*               nil                                 
  *print-radix*                nil                                 
  *print-readably*             t                                   
  *print-right-margin*         nil                                 
  *read-base*                  10                                  
  *read-default-float-format*  single-float                        
  *read-eval*                  t                                   
  *read-suppress*              nil                                 
  *readtable*                  The standard readtable

Christopher
From: Tim Bradshaw
Subject: Re: arithmetic
Date: 
Message-ID: <ey3vh7t2rah.fsf@lostwithiel.tfeb.org>
* H Tunc Simsek wrote:


> My Windows Allegro doesn't suffer this problem, at least for this
> particular 
> example:

The old Windows Allegro is a completely different code base from 5.0,
and only had one float format, which I believe may have been double
float (I don't have an implementation or documentation handy to
check).

> The entire idea that there is not much one can do about this just
> doesn't make sense.
> I've used C and Matlab over the years and such a problem never caught my
> attention,
> at least when dealing with an accuracy of 0.1.

Indeed it does make sense, if you want to use machine floats, rather
than (say) using ratios of arbitrary precision integers.
Floating-point arithmetic as performed by the silicon just has these
limits.

You don't, by default, see it in C because the default %f conversion
doesn't print enough precision.  I don't know enough about Matlab to
say.  It's still there in C though, try this program:

    #include <stdio.h>
    #include <stdlib.h>

    int main (void)
    {
	float x = 0.6, y = 0.1;
	printf("%1.10f\n", x+y);
	exit(0);
    }

I have a lisp system (Genera) which prints particularly good
descriptions of floats, and uses (like almost all modern machines)
ieee floats. Here are the results of asking it some things about the
numbers & operations here:

~ (describe 0.6)
0.6 is a single-precision floating-point number.
  Sign 0, exponent 176, 23-bit fraction 06314632  (not including hidden bit)
  Its exact decimal value is 0.60000002384185791015625

~ (describe 0.1)
0.1 is a single-precision floating-point number.
  Sign 0, exponent 173, 23-bit fraction 23146315  (not including hidden bit)
  Its exact decimal value is 0.100000001490116119384765625

~ (+ 0.6 0.1)
0.70000005

~ (describe *)
0.70000005 is a single-precision floating-point number.
  Sign 0, exponent 176, 23-bit fraction 14631464  (not including hidden bit)
  Its exact decimal value is 0.7000000476837158203125

~ (describe 0.6d0)
0.6d0 is a double-precision floating-point number.
  Sign 0, exponent 1776, 52-bit fraction 031463146314631463  (not including hidden bit)
  Its exact decimal value is 0.59999999999999997779553950749686919152736663818359375d0

~ (describe 0.1d0)
0.1d0 is a double-precision floating-point number.
  Sign 0, exponent 1773, 52-bit fraction 114631463146314632  (not including hidden bit)
  Its exact decimal value is 0.1000000000000000055511151231257827021181583404541015625d0

~ (+ 0.6d0 0.1d0)
0.7d0

~ (describe *)
0.7d0 is a double-precision floating-point number.
  Sign 0, exponent 1776, 52-bit fraction 063146314631463146  (not including hidden bit)
  Its exact decimal value is 0.6999999999999999555910790149937383830547332763671875d0

~ (float (+ 6/10 1/10))
0.7

~ (describe *)
0.7 is a single-precision floating-point number.
  Sign 0, exponent 176, 23-bit fraction 14631463  (not including hidden bit)
  Its exact decimal value is 0.699999988079071044921875

~ (float (+ 1/10 6/10) 1.0d0)
0.7d0

~ (describe *)
0.7d0 is a double-precision floating-point number.
  Sign 0, exponent 1776, 52-bit fraction 063146314631463146  (not including hidden bit)
  Its exact decimal value is 0.6999999999999999555910790149937383830547332763671875d0
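Those "exact decimal value" lines are a property of IEEE doubles, not of
Genera; any conforming implementation stores the same bits.  As a sketch,
Python's Decimal constructor applied to a float yields the identical digits:

```python
from decimal import Decimal

# Exact stored values of the double-precision literals, cf. DESCRIBE above:
print(Decimal(0.6))   # 0.59999999999999997779553950749686919152736663818359375
print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625

# In double precision the two rounding errors happen to cancel, so the
# sum rounds to the very double that the literal 0.7 reads as:
print(0.6 + 0.1 == 0.7)   # True (unlike single precision)
print(Decimal(0.7))       # 0.6999999999999999555910790149937383830547332763671875
```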
From: Mike McDonald
Subject: Symbolics arithmetic (Was: arithmetic)
Date: 
Message-ID: <nZGR3.1240$As1.94467@typhoon.aracnet.com>
In article <···············@lostwithiel.tfeb.org>,
	Tim Bradshaw <···@tfeb.org> writes:

> I have a lisp system (Genera) which prints particularly good
> descriptions of floats, and uses (like almost all modern machines)
> ieee floats. Here are the results of asking it some things about the
> numbers & operations here:
> 
> ~ (describe 0.6)
> 0.6 is a single-precision floating-point number.
>   Sign 0, exponent 176, 23-bit fraction 06314632  (not including hidden bit)
>   Its exact decimal value is 0.60000002384185791015625

  Hmm, how does Genera know that you typed in 0.6 and not
0.60000002384185791015625? Somehow DESCRIBE got ahold of the exact value even
though it claims it got the floating point number. Is DESCRIBE rounding things
off?

  Mike McDonald
  ·······@mikemac.com
From: Barry Margolin
Subject: Re: Symbolics arithmetic (Was: arithmetic)
Date: 
Message-ID: <5zJR3.82$uC6.3571@burlma1-snr2>
In article <····················@typhoon.aracnet.com>,
Mike McDonald <·······@mikemac.com> wrote:
>In article <···············@lostwithiel.tfeb.org>,
>	Tim Bradshaw <···@tfeb.org> writes:
>> ~ (describe 0.6)
>> 0.6 is a single-precision floating-point number.
>>   Sign 0, exponent 176, 23-bit fraction 06314632  (not including hidden bit)
>>   Its exact decimal value is 0.60000002384185791015625
>
>  Hmm, how does Genera know that you typed in 0.6 and not
>0.60000002384185791015625? Somehow DESCRIBE got ahold of the exact value even
>though it claims it got the floating point number. Is DESCRIBE rounding things
>off?

It doesn't.  You would get the same output if you entered:

(describe 0.60000002384185791015625)

0.6 is the shortest printed representation that when read in will result in
the same internal representation, so this is used as the printed
representation for that number.  I suggest you read the paper on printing
floating point numbers, I think by Guy Steele and/or Richard Gabriel.
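The shortest-round-trip principle is visible in any system whose printer
implements it; Python 3's float repr, for example, follows the same idea
(a sketch, assuming Python 3.1 or later):

```python
from decimal import Decimal

x = 0.1
print(Decimal(x))   # 0.1000000000000000055511151231257827021181583404541015625
print(repr(x))      # 0.1 -- the shortest string that reads back to the same bits

# Reading the full exact decimal value back gives the identical float:
print(float(str(Decimal(x))) == x)   # True
```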

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: R. Matthew Emerson
Subject: Re: Symbolics arithmetic (Was: arithmetic)
Date: 
Message-ID: <874sfcbjfd.fsf@nightfly.apk.net>
Barry Margolin <······@bbnplanet.com> writes:

> It doesn't.  You would get the same output if you entered:
> 
> (describe 0.60000002384185791015625)
> 
> 0.6 is the shortest printed representation that when read in will result in
> the same internal representation, so this is used as the printed
> representation for that number.  I suggest you read the paper on printing
> floating point numbers, I think by Guy Steele and/or Richard Gabriel.

Apparently, the proceedings of the 1990 ACM Conference on Programming
Language Design and Implementation is the thing to have in hand
when you want to know about printing and reading floating-point numbers.

These proceedings include a paper by William Clinger, "How to read
floating-point numbers accurately," and also a paper by Guy L. Steele Jr
and Jon L White, "How to print floating-point numbers accurately."

(Jon L White talked about this topic a bit at the recent
Lisp User Group meeting, and cited both these references in
his paper in the LUGM proceedings.)

-matt
From: Lars Lundback
Subject: Re: arithmetic
Date: 
Message-ID: <3816F85A.72722615@eralslk.ericsson.se>
Tim Bradshaw wrote:
> 
> 
> > The entire idea that there is not much one can do about this just
> > doesn't make sense.
> > I've used C and Matlab over the years and such a problem never caught my
> > attention,
> > at least when dealing with an accuracy of 0.1.
> 
> Indeed it does make sense, if you want to use machine floats, rather
> than (say) using ratios of arbitrary precision integers.
> Floating-point arithmetic as performed by the silicon just has these
> limits.
> 
> You don't, by default, see it in C because the default %f conversion
> doesn't print enough precision.  I don't know enough about Matlab to
> say.  It's still there in C though, try this program:
> 
>     #include <stdio.h>
> 
>     main ()
>     {
>         float x = 0.6, y = 0.1;
>         printf("%1.10f\n", x+y);
>         exit(0);
>     }
> 
> ......


Yes, this matter of comparing numbers is interesting. To begin with, Lisp is of
course the superior language, since C and similar languages natively cover only
the simple machine number types. For anything else, you use separate libraries
and include files.

But back to comparing float numbers. I feel that the comparison methods in this
thread do not tell the whole story. One example is: You want to check if two
values are "the same". Which, then, is the smallest delta, such that any two
values, in a given value range, may confidently be tested for "sameness"? In an
application, you would probably compute this delta and use it as a constant. We
did that in CAD development, when double floats were expensive, and I'm sure
applications in other fields did too. Operators like '==' (C) or '=' (Lisp) do
not suffice, and sometimes not even the system-supplied constants do.

Regards, Lars Lundback


PS, More on the superiority of CL:

There is an impressive set of numeric constants, built into the language and
explained in the spec. Where can one find the same in other *languages*? For a
single float, there is (from CL HyperSpec):

most-negative-single-float             most-positive-single-float
least-negative-single-float            least-positive-single-float
least-negative-normalized-single-float least-positive-normalized-single-float
single-float-negative-epsilon          single-float-epsilon  

The Glossary, in CL HyperSpec even explains "normalized":

"denormalized adj., ANSI, IEEE (of a float) conforming to the
 description of ``denormalized'' as described by IEEE Standard 
 for Binary Floating-Point Arithmetic. For example, in an
 implementation where the minimum possible exponent was -7 but
 where 0.001 was a valid mantissa, the number 1.0e-10 might be
 representable as 0.001e-7 internally even if the normalized
 representation would call for it to be represented instead as
 1.0e-10 or 0.1e-9. By their nature, denormalized floats generally
 have less precision than normalized floats. "

And so on. These values were obtained from CMUCL (disregard the less significant
digits in the printout) to show some limits:

* least-positive-single-float
1.4012985e-45

* least-positive-normalized-single-float
1.17549434e-38

* single-float-negative-epsilon
2.9802325e-8

* single-float-epsilon
5.960465e-8

Then compare against the meagre data from a C include file "limits.h" (there are
doubtless better C include files somewhere. There are two files, one from GNU
gcc and one from Sun, on my old Sun Sparc. They do not even have the same
filenames. Ickkk):

#define	FLT_DIG	6                /* digits of precision of a "float" */
#define	FLT_MAX	3.402823466E+38F /* max decimal value of a "float" */
#define	FLT_MIN	1.175494351E-38F /* min decimal value of a "float" */
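Modern C's <float.h> has since grown FLT_EPSILON and friends; Python's
sys.float_info (used here as an illustrative stand-in) exposes the
double-precision analogues of the CL constants:

```python
import sys

print(sys.float_info.mant_dig)   # 53: the 52-bit fraction plus the hidden bit
print(sys.float_info.epsilon)    # 2.220446049250313e-16, i.e. 2**-52
print(sys.float_info.min)        # least positive *normalized* double, 2**-1022
print(5e-324)                    # least positive denormalized double, 2**-1074
```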
From: Howard R. Stearns
Subject: Re: arithmetic
Date: 
Message-ID: <381716C8.6438E38F@elwood.com>
Lars' and Tim's comments are worth emphasizing.  I confess that I have
never truly understood "numbers" and "arithmetic", but as an engineering
rule of thumb:

1. Never compare floating point numbers with =.
2. Never trust the printer. Doubt the reader, too. (...unless you REALLY
understand it. I don't. Was it Steele or White who wrote "THE"
floating-point printer paper?)

If you must compare two floating point numbers:

(< (abs (- 0.7f0 (+ 0.6f0 0.1f0)))
   single-float-epsilon)
=> T  ;I'm deliberately avoiding the issues of normalization, etc.
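An absolute epsilon like this works near 1.0, but float spacing grows with
magnitude, so a fixed epsilon misclassifies large values.  A minimal sketch
(in Python, with single-float-epsilon's 2**-23 as an assumed ratio) of the
usual relative-tolerance comparison:

```python
def approx_eq(a, b, rel=2**-23):
    # Tolerance scaled by the larger operand's magnitude.
    return abs(a - b) <= rel * max(abs(a), abs(b))

print(approx_eq(0.7, 0.6 + 0.1))   # True
print(1e16 + 2 == 1e16)            # False: adjacent doubles, 2.0 apart
print(approx_eq(1e16, 1e16 + 2))   # True: one ulp apart, relatively tiny
```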

By the way, Scheme, which tries to "do things right", has an explicit
concept of "exact" and "inexact" numbers.  I find it surprising that it
bows to efficiency in a particularly useless place by saying in the
spec:

  Note: While it is not an error to compare inexact numbers using these
predicates,   the results may be unreliable because
  a small inaccuracy may affect the result; this is especially true of =
and zero?.   When in doubt, consult a numerical analyst. 

I don't know why they just didn't define = on inexact numbers to compare
against an epsilon.  (How "New Jersey" of them!  (In-joke, just forget
it.))


Lars Lundback wrote:
> 
> Tim Bradshaw wrote:
> >
> >
> > > The entire idea that there is not much one can do about this just
> > > doesn't make sense.
> > > I've used C and Matlab over the years and such a problem never caught my
> > > attention,
> > > at least when dealing with an accuracy of 0.1.
> >
> > Indeed it does make sense, if you want to use machine floats, rather
> > than (say) using ratios of arbitrary precision integers.
> > Floating-point arithmetic as performed by the silicon just has these
> > limits.
> >
> > You don't, by default, see it in C because the default %f conversion
> > doesn't print enough precision.  I don't know enough about Matlab to
> > say.  It's still there in C though, try this program:
> >
> >     #include <stdio.h>
> >
> >     main ()
> >     {
> >         float x = 0.6, y = 0.1;
> >         printf("%1.10f\n", x+y);
> >         exit(0);
> >     }
> >
> > ......
> 
> Yes, this matter of comparing numbers is interesting. To begin with, Lisp is of
> course the superior language, since C etc languages only cover the simple
> integer types. For anything else, you use separate libraries and include files.
> 
> But back to comparing float numbers. I feel that the comparison methods in this
> thread do not tell the whole story. One example is: You want to check if two
> values are "the same". Which, then, is the smallest delta, such that any two
> values, in a given value range, may confidently be tested for "sameness"? In an
> application, you would probably compute this delta and use it as a constant. We
> did that in CAD development, when double floats were expensive, and I'm sure
> applications in other fields did too. Using operators like '==' (C) or '='
> (Lisp) do not suffice, and sometimes not even the system-supplied constants.
> 
> Regards, Lars Lundback
> 
> PS, More on the superiority of CL:
> 
> There is an impressive set of numeric constants, built into the language and
> explained in the spec. Where can one find the same in other *languages*? For a
> single float, there is (from CL HyperSpec):
> 
> most-negative-single-float             most-positive-single-float
> least-negative-single-float            least-positive-single-float
> least-negative-normalized-single-float least-positive-normalized-single-float
> single-float-negative-epsilon          single-float-epsilon
> 
> The Glossary, in CL HyperSpec even explains "normalized":
> 
> "denormalized adj., ANSI, IEEE (of a float) conforming to the
>  description of ``denormalized'' as described by IEEE Standard
>  for Binary Floating-Point Arithmetic. For example, in an
>  implementation where the minimum possible exponent was -7 but
>  where 0.001 was a valid mantissa, the number 1.0e-10 might be
>  representable as 0.001e-7 internally even if the normalized
>  representation would call for it to be represented instead as
>  1.0e-10 or 0.1e-9. By their nature, denormalized floats generally
>  have less precision than normalized floats. "
> 
> And so on. These values were obtained from CMUCL (disregard the less significant
> digits in the printout) to show some limits:
> 
> * least-positive-single-float
> 1.4012985e-45
> 
> * least-positive-normalized-single-float
> 1.17549434e-38
> 
> * single-float-negative-epsilon
> 2.9802325e-8
> 
> * single-float-epsilon
> 5.960465e-8
> 
> Then compare against the meagre data from a C include file "limits.h" (there are
> doubtless better C include files somewhere. There are two files, one from GNU
> gcc and one from Sun, on my old Sun Sparc. Thet do not even have the same
> filenames. Ickkk):
> 
> #define FLT_DIG 6                /* digits of precision of a "float" */
> #define FLT_MAX 3.402823466E+38F /* max decimal value of a "float" */
> #define FLT_MIN 1.175494351E-38F /* min decimal value of a "float" */
From: Barry Margolin
Subject: Re: arithmetic
Date: 
Message-ID: <LYER3.58$uC6.2694@burlma1-snr2>
In article <·················@eralslk.ericsson.se>,
Lars Lundback  <·······@eralslk.ericsson.se> wrote:
>Yes, this matter of comparing numbers is interesting.

Did the original poster ever take any computer science classes, or is he
self-taught?  If he's taken programming classes, I feel bad about the state
of CS education these days.  ISTR that the problems with floating point
were taught in our high school programming classes, and most introductory
programming texts mentioned it.  They had simple exercises in BASIC like:

FOR I = 1 TO 2 BY .1
  PRINT I
END

(I'm pretty sure that's incorrect BASIC syntax -- it's been almost 20 years
since I've used that language) which showed the failure.
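The exercise fails the same way in any binary floating-point language today;
here is the gist of it in Python (doubles, illustrative):

```python
# Add 0.1 ten times; in binary floating point this never lands on 1.0.
x = 0.0
for _ in range(10):
    x += 0.1           # each step adds a value slightly off from 1/10
print(x)               # 0.9999999999999999
print(x == 1.0)        # False: the rounding errors have accumulated
```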

The only language I've encountered that tries to hide the problem of
comparing floating point numbers is APL.  It has a "fuzz factor" parameter,
and turns equality comparisons into comparing if the difference is less
than this epsilon.  Most other languages depend on the programmer to do
this explicitly when necessary.

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Rolf-Thomas Happe
Subject: Re: arithmetic
Date: 
Message-ID: <r5puxzog04.fsf@bonnie.mathematik.uni-freiburg.de>
Barry Margolin:
> Did the original poster ever take any computer science classes, or is he
> self-taught?  If he's taken programming classes, I feel bad about the state
> of CS education these days.  ISTR that the problems with floating point

Those who want to read up on this matter may consider

    David Goldberg: What every computer scientist should know
    about floating-point arithmetic.
    ACM computing Surveys, Vol. 23, No. 1, March 1991, pp. 5-48

For material (papers, applications) related to arbitrary precision
floating-point arithmetic / adaptive precision arithmetic,
see the home page of J. R. Shewchuk

    http://www.cs.cmu.edu/~jrs/

rthappe
From: David Hanley
Subject: Re: arithmetic
Date: 
Message-ID: <3819CC56.D79678D6@ncgr.org>
Barry Margolin wrote:

> In article <·················@eralslk.ericsson.se>,
> Lars Lundback  <·······@eralslk.ericsson.se> wrote:
> >Yes, this matter of comparing numbers is interesting.
>
> Did the original poster ever take any computer science classes, or is he
> self-taught?  If he's taken programming classes, I feel bad about the state
> of CS education these days.

I don't remember this being mentioned in my undergrad CS classes.

dave
From: Christopher Browne
Subject: Re: arithmetic
Date: 
Message-ID: <ACsS3.42834$7I4.742785@news5.giganews.com>
On Fri, 29 Oct 1999 10:33:26 -0600, David Hanley <···@ncgr.org> wrote:
>Barry Margolin wrote:
>> In article <·················@eralslk.ericsson.se>,
>> Lars Lundback  <·······@eralslk.ericsson.se> wrote:
>> >Yes, this matter of comparing numbers is interesting.
>>
>> Did the original poster ever take any computer science classes, or
>> is he self-taught?  If he's taken programming classes, I feel bad
>> about the state of CS education these days.
>
>I don't remember this being mentioned in my undergrad CS classes. 

I *very* briefly saw it in the two days that I audited the numerical
analysis course.

That course wasn't part of my major, and I (tut, tut) signed up solely
because it was the only way I'd get computer access that term.

I'd be a bit surprised if it's not mandated to at least have a couple
days on this sort of thing in a CS degree.  I've not looked at what's
in the ACM curriculum standards lately; it might be there.
-- 
Rules of the Evil Overlord #45. "I will not turn into a snake. It never
helps." 
<http://www.eviloverlord.com/lists/overlord.html>
········@ntlug.org- <http://www.ntlug.org/~cbbrowne/lsf.html>
From: Janos Blazi
Subject: Re: arithmetic
Date: 
Message-ID: <accessible-7veatp/INN-2.2.1/allotropic@broadway.news.is-europe.net>
I studied pure math in the early seventies and computer science in the
nineties without coming across c.f., except in some books I read privately. I
think the number of important subjects is simply too large, and no
curriculum can cover EVERYTHING. It may also depend on the university and
which subjects they put emphasis on. So c.f.s are nice but not
indispensable.

Christopher Browne <········@news.hex.net> schrieb in im Newsbeitrag:
······················@news5.giganews.com...
> On Fri, 29 Oct 1999 10:33:26 -0600, David Hanley <···@ncgr.org> wrote:
> >Barry Margolin wrote:
> >> In article <·················@eralslk.ericsson.se>,
> >> Lars Lundback  <·······@eralslk.ericsson.se> wrote:
> >> >Yes, this matter of comparing numbers is interesting.
> >>
> >> Did the original poster ever take any computer science classes, or
> >> is he self-taught?  If he's taken programming classes, I feel bad
> >> about the state of CS education these days.
> >
> >I don't remember this being mentioned in my undergrad CS classes.
>
> I *very* briefly saw it in the two days that I audited the numerical
> analysis course.
>
> That course wasn't part of my major, and I (tut, tut) signed up solely
> because it was the only way I'd get computer access that term.
>
> I'd be a bit surprised if it's not mandated to at least have a couple
> days on this sort of thing in a CS degree.  I've not looked at what's
> in the ACM curriculum standards lately; it might be there.
> --
> Rules of the Evil Overlord #45. "I will not turn into a snake. It never
> helps."
> <http://www.eviloverlord.com/lists/overlord.html>
> ········@ntlug.org- <http://www.ntlug.org/~cbbrowne/lsf.html>
From: Lars Lundback
Subject: Re: arithmetic
Date: 
Message-ID: <381D632F.E5018A11@eralslk.ericsson.se>
David Hanley wrote:
> 
> Barry Margolin wrote:
> 
> > In article <·················@eralslk.ericsson.se>,
> > Lars Lundback  <·······@eralslk.ericsson.se> wrote:
> > >Yes, this matter of comparing numbers is interesting.
> >
> > Did the original poster ever take any computer science classes, or is he
> > self-taught?  If he's taken programming classes, I feel bad about the state
> > of CS education these days.
> 
> I don't remember this being mentioned in my undergrad CS classes.
> 
> dave

Barry's first quote was all right, but since I'm not the original poster, I now
begin to feel that I'm being pushed into the beginner's class. I may belong
there, but that is another matter. :-)

Anyhow, I just wanted to point out that an application program should not rely
on a particular float type's precision, should verify that the epsilon chosen
is adequate for the problem at hand, and may have to compute it. (E.g.
LispWorks on a PC seems to implement single-float as double-float, whereas
using true single-floats with CMUCL would create problems when working with
mechanical tolerances. Etc.)

This observation may seem esoteric to some (ref to Mr Blazi, in another thread),
and trivial to others (ref to Mr Blazi again, since all Teachers seem to Know
Best?).

Regards, Lars
From: William Deakin
Subject: Re: arithmetic
Date: 
Message-ID: <3816E0F9.E23E2EE@pindar.com>
Chuck Fry wrote:

> >Is there a way to avoid this?
>
> Yes.  Express all your floating point numbers in base 2. :-(

There is a lot of stuff in "Numerical Recipes in C" about this. As an aside,
there are some IBM boxes (but since I don't have the book to hand I can't tell
you which) that use base 8.

Yours pedantically,

:) will
From: Raymond Toy
Subject: Re: arithmetic
Date: 
Message-ID: <4nzox4d8m5.fsf@rtp.ericsson.se>
>>>>> "William" == William Deakin <·····@pindar.com> writes:

    William> Chuck Fry wrote:
    >> >Is there a way to avoid this?
    >> 
    >> Yes.  Express all your floating point numbers in base 2. :-(

    William> There is alot of stuff in "Numerical Recipies in C" about
    William> this. As an aside there are some IBM boxes (but since I
    William> don't have the book to hand I can't tell you which) that
    William> use base 8.

I don't know about base 8, but I know some IBM boxes (3033 at least)
used base 16 for their floating-point representation.

Ray
From: Barry Margolin
Subject: Re: arithmetic
Date: 
Message-ID: <jGJR3.83$uC6.3571@burlma1-snr2>
In article <··············@rtp.ericsson.se>,
Raymond Toy  <···@rtp.ericsson.se> wrote:
>>>>>> "William" == William Deakin <·····@pindar.com> writes:
>
>    William> Chuck Fry wrote:
>    >> >Is there a way to avoid this?
>    >> 
>    >> Yes.  Express all your floating point numbers in base 2. :-(
>
>    William> There is alot of stuff in "Numerical Recipies in C" about
>    William> this. As an aside there are some IBM boxes (but since I
>    William> don't have the book to hand I can't tell you which) that
>    William> use base 8.
>
>I don't know about base 8, but I know some IBM boxes (3033 at least)
>used base 16 for their floating-point representation.

Honeywell machines also have hex floating point.  However, the only
difference this provides is a larger range of floating point values; the
base refers to the way the exponent is interpreted, i.e. instead of the
number being sign*mantissa*2^exponent, it's sign*mantissa*16^exponent.  In
both cases, the mantissa is a binary number.

The original comment presumably referred to the way people should express
their numbers' *external* representations.  The reason we have problems
like this thread is because people write numbers in base 10, and they look
exact to them, but they're converted to base 2 internally, and not all
decimal fractions have exact representations in base 2.

To see an analogous situation, try using an ordinary decimal calculator to
compute 1/3 + 1/3 + 1/3.  You'll have to enter something like .3333 + .3333
+ .3333, and the result will be .9999 rather than 1.0.

If you want exact arithmetic, Common Lisp provides rational numbers.
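The same contrast shows in any language with exact rationals; a sketch using
Python's fractions module as a stand-in for CL's ratio type:

```python
from fractions import Fraction

# Exact rational arithmetic, as with CL's (+ 1/3 1/3 1/3):
print(Fraction(1, 3) + Fraction(1, 3) + Fraction(1, 3))   # 1

# The decimal-calculator analogue truncates 1/3 first and loses exactness:
print(Fraction('0.3333') * 3)   # 9999/10000
```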

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Marco Antoniotti
Subject: Re: arithmetic
Date: 
Message-ID: <lwzox33oj1.fsf@parades.rm.cnr.it>
Barry Margolin <······@bbnplanet.com> writes:

> The original comment presumably referred to the way people should express
> their numbers' *external* representations.  The reason we have problems
> like this thread is because people write numbers in base 10, and they look
> exact to them, but they're converted to base 2 internally, and not all
> decimal fractions have exact representations in base 2.
> 
> To see an analogous situation, try using an ordinary decimal calculator to
> compute 1/3 + 1/3 + 1/3.  You'll have to enter something like .3333 + .3333
> + .3333, and the result will be .9999 rather than 1.0.
> 
> If you want exact arithmetic, Common Lisp provides rational numbers.

In CMUCL

* (+ 1/3 1/3 1/3)
1

In CLisp

[1]> (+ 1/3 1/3 1/3)
1

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: Barry Margolin
Subject: Re: arithmetic
Date: 
Message-ID: <NW0S3.117$uC6.5384@burlma1-snr2>
In article <··············@parades.rm.cnr.it>,
Marco Antoniotti  <·······@parades.rm.cnr.it> wrote:
>
>Barry Margolin <······@bbnplanet.com> writes:
>> To see an analogous situation, try using an ordinary decimal calculator to
>> compute 1/3 + 1/3 + 1/3.  You'll have to enter something like .3333 + .3333
>> + .3333, and the result will be .9999 rather than 1.0.
>> 
>> If you want exact arithmetic, Common Lisp provides rational numbers.
>
>In CMUCL
>
>* (+ 1/3 1/3 1/3)
>1
>
>In CLisp
>
>[1]> (+ 1/3 1/3 1/3)
>1

Isn't that what I said?  If you use rational numbers, you get exact
arithmetic.  To see the situation that's analogous to floating point
numbers, you have to use "an ordinary decimal calculator".

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Marco Antoniotti
Subject: Re: arithmetic
Date: 
Message-ID: <lwzox2wkff.fsf@parades.rm.cnr.it>
Barry Margolin <······@bbnplanet.com> writes:

> In article <··············@parades.rm.cnr.it>,
> Marco Antoniotti  <·······@parades.rm.cnr.it> wrote:
> >
> >Barry Margolin <······@bbnplanet.com> writes:
> >> To see an analogous situation, try using an ordinary decimal calculator to
> >> compute 1/3 + 1/3 + 1/3.  You'll have to enter something like .3333 + .3333
> >> + .3333, and the result will be .9999 rather than 1.0.
> >> 
> >> If you want exact arithmetic, Common Lisp provides rational numbers.
> >
> >In CMUCL
> >
> >* (+ 1/3 1/3 1/3)
> >1
> >
> >In CLisp
> >
> >[1]> (+ 1/3 1/3 1/3)
> >1
> 
> Isn't that what I said?  If you use rational numbers, you get exact
> arithmetic.  To see the situation that's analogous to floating point
> numbers, you have to use "an ordinary decimal calculator".

Yes, of course.  It was just to emphasize what you said.

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa
From: William Deakin
Subject: Re: arithmetic
Date: 
Message-ID: <38180BF0.A86C7D2C@pindar.com>
I agree with what you say.

What I was on about is this: there is a chapter at the end of Numerical
Recipes dedicated to determining the size of floats, doubles etc., and this
includes finding out the base the system uses for storing numbers. This base
is used to reduce rounding in certain calculations, such as balancing
matrices for use in eigenvalue/eigenvector calculations, using
jiggery-pokery with dividing/multiplying by the value of the base. This can
overcome some floating-point rounding error.

This is important (as I understand it) because these kinds of calculations
can be sensitive to this kind of rounding problem; I believe the term is
"ill-conditioned."

Best Regards,

:) will
From: Christopher Browne
Subject: Re: arithmetic
Date: 
Message-ID: <AtOR3.56989$y45.657109@news4.giganews.com>
On 27 Oct 1999 13:19:30 -0400, Raymond Toy <···@rtp.ericsson.se>
wrote: 
>>>>>> "William" == William Deakin <·····@pindar.com> writes:
>
>    William> Chuck Fry wrote:
>    >> >Is there a way to avoid this?
>    >> 
>    >> Yes.  Express all your floating point numbers in base 2. :-(
>
>    William> There is alot of stuff in "Numerical Recipies in C" about
>    William> this. As an aside there are some IBM boxes (but since I
>    William> don't have the book to hand I can't tell you which) that
>    William> use base 8.
>
>I don't know about base 8, but I know some IBM boxes (3033 at least)
>used base 16 for their floating-point representation.

This doesn't help when you get values that were originally represented
in decimal.  The value 1/10, when turned into binary, becomes a
repeating "binary" fraction.

This is why Continued Fractions were invented, and why many
Lisps support rational numbers.  

<tangent on weird choices of bases>
I have the notion that it might be attractive to have a sort of
equivalent to BCD where the base is, rather than 10, 240.  This has
two notable meritorious properties:

a) 240 is nicely divisible by 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 20,
24, ...  which means that there are lots of fractional values that
will reduce nicely without the "repeated decimal" problem.

1/10th (decimal) is represented as the sequence
  ( 24 ) 
and is exact.

1/16th is represented as
  ( 15 )
and is, again, an exact representation.

In effect, it provides an exact representation without repeats for
fractions whose denominators have only 2, 3, and 5 as factors.

b) 240 fits comfortably into 8 bits, thus meaning that it is
reasonably space-efficient.  (Better than BCD, which can only count up
to 100 in 8 bits.)

The other most interesting option for "base" would be 210, which is
  (* 2 3 5 7)
That wins you the factor of 7, which increases the potential for
having no repeats.

1/10th --> ( 21 )
  which is OK

But 1/16th is represented as the longer sequence
  ( 13 26 52 105 )
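(The digit sequences above can be checked mechanically -- a sketch in Python, using exact rationals so as to avoid the very rounding under discussion:)

```python
from fractions import Fraction

def base_digits(frac, base, max_digits=16):
    """Digits d1, d2, ... with frac = d1/base + d2/base**2 + ...;
    stops early when the expansion terminates (i.e. is exact)."""
    digits = []
    for _ in range(max_digits):
        frac *= base
        d = int(frac)        # integer part is the next digit
        digits.append(d)
        frac -= d
        if frac == 0:        # terminated: exact representation
            break
    return digits

assert base_digits(Fraction(1, 10), 240) == [24]   # exact, one digit
assert base_digits(Fraction(1, 16), 240) == [15]   # exact, one digit
assert base_digits(Fraction(1, 10), 210) == [21]   # exact, one digit
# ... while 1/16th in base 210 takes four digits to terminate:
assert base_digits(Fraction(1, 16), 210) == [13, 26, 52, 105]
```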
-- 
>Ever heard of .cshrc?
That's a city in Bosnia.  Right?
(Discussion in comp.os.linux.misc on the intuitiveness of commands.)
········@hex.net- <http://www.ntlug.org/~cbbrowne/lsf.html>
From: William Deakin
Subject: Re: arithmetic
Date: 
Message-ID: <3817FEDD.7C0A8472@pindar.com>
Christopher Browne wrote:

> <tangent on weird choices of bases> I have the notion that it might be
> attractive to have sort of equivalent to BCD where the base is, rather
> than 10, 240.

I think my Dad would agree with this. His question about metric was always:
why do you think there are 12 inches in a foot, and not 10?

(<tangent about weird builders' merchants> He then goes on to tell a story
about a builders' merchant in Bristol that invented the metric foot. The
metric foot was the same length as the normal 12" foot (304.8mm, using the
international standard inch of 25.4mm), but was split into 10 metric inches
of 30.48mm...but I think that enough has been said already.)

:) will
From: Clemens Heitzinger
Subject: Re: arithmetic
Date: 
Message-ID: <1e0dx15.1doe23min0f34N%cheitzin@ag.or.at>
Christopher Browne <········@news.hex.net> wrote:

> This doesn't help when you get values that were originally represented
> in decimal.  The value 1/10, when turned into binary, becomes a
> repeated "binary" fractional value.
> 
> This is why Continued Fractions were invented, and why many
> Lisps support rational numbers.  

Pedantically speaking, even the old Greeks knew continued fractions, which
they used to express irrational values (of course -- rational values
don't give continued fractions ;-).

> <tangent on weird choices of bases>
> I have the notion that it might be attractive to have sort of
> equivalent to BCD where the base is, rather than 10, 240.  This has
> two notable meritous properties:
> 
> a) 240 is nicely divisible by 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 20,
> 24, ...  which means that there are lots of fractional values that
> will reduce nicely without the "repeated decimal" problem.

> The other most interesting option for "base" would be 210, which is
>   (* 2 3 5 7)
> That wins you the factor of 7, which increases the potential for
> having no repeats.

Although bases like 210 and 240 have the advantage that one can
represent more fractions without repeats than with, eg, base 10, there
are still many, many more prime factors one would wish to include.  And
in practice, ie, in numerical analysis, one deals with approximations
all the time (usually the inputs are approximations to start with), and
therefore this more elegant approach has no advantages.

However, if you really deal with rationals, use rationals.

Yours,
Clemens
-- 
Clemens Heitzinger
http://ag.or.at:8000/~clemens
From: Robert Monfera
Subject: Re: arithmetic
Date: 
Message-ID: <381858CE.9FD3FA9D@fisec.com>
Clemens Heitzinger wrote:
...
> Pedantically speaking, even the old Greek new continued fractions which
> they used to express irrational values (of course -- rational values
> don't give continued fractions ;-).

Maybe I'm missing the joke -

(coerce (/ 1 7) 'float)
-> 0.14285714285714285
     ^     ^     ^

Also, is it not true that all continued fractions are rational numbers,
and all rational numbers can be expressed as continued fractions
(however lengthy)?

(As a note, I vaguely remember the Greeks calculated pi as (/ 22 3) or
something similar.)

Robert
From: Clemens Heitzinger
Subject: Re: arithmetic
Date: 
Message-ID: <1e0ejcn.1ul79ti1m97ggoN%cheitzin@ag.or.at>
Robert Monfera <·······@fisec.com> wrote:

> Clemens Heitzinger wrote:
> ...
> > Pedantically speaking, even the old Greek new continued fractions which
> > they used to express irrational values (of course -- rational values
> > don't give continued fractions ;-).
> 
> Maybe I'm missing the joke -
> 
> (coerce (/ 1 7) 'float)
> -> 0.14285714285714285
>      ^     ^     ^
>
> Also, is it not true that all continued fractions are rational numbers,
> and all rational numbers can be expressed as continued fractions
> (however lengthy)?

I looked at the last articles and now I see the confusion.  Christopher
talked about decimals and then about continued fractions, but these are
two entirely different beasts.  I'm sure he meant decimals (in any
base) all the time, but "continued fraction" triggered something
(irrelevant in that context) in my brain...

A continued fraction is the function

a_0 + 1/(a_1 + 1/(a_2 + 1/(... + 1/a_N)))

of the N+1 variables a_0,...,a_N, where a_0 is an integer and the a_i
for i >= 1 are integers > 0.  It is often written as
[a_0, a_1, ..., a_N].  More precisely, this is a
finite continued fraction.  There are also infinite continued fractions,
defined as expected.  Every irrational number can be expressed in
exactly one way as an infinite continued fraction; every rational number
can be expressed as a finite continued fraction.  (Of course, I should
have said infinite continued fractions in the paragraph quoted above.)
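(The rational case is easy to play with -- a sketch in Python for illustration: the a_i fall out of the Euclidean algorithm, and evaluating them back recovers the fraction exactly.)

```python
from fractions import Fraction

def to_cf(x):
    """Finite continued fraction [a_0, a_1, ..., a_N] of a rational x,
    computed by the Euclidean algorithm."""
    terms = []
    while True:
        a = x.numerator // x.denominator   # floor of x
        terms.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x                          # stays an exact Fraction
    return terms

def from_cf(terms):
    """Evaluate a_0 + 1/(a_1 + 1/(... + 1/a_N)) as an exact Fraction."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

x = Fraction(355, 113)            # a well-known approximation to pi
assert to_cf(x) == [3, 7, 16]
assert from_cf(to_cf(x)) == x     # the round trip is exact
```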

So the Greeks "invented" continued fractions and knew, eg, the
representation of (sqrt 2): [1,2,2,2,2,2,...] and (sqrt 5):
[2,4,4,4,4,4,...].  One story is, they wanted to express the golden
section as a rational and that led them to continued fractions.

Yours,
Clemens
-- 
Clemens Heitzinger
http://ag.or.at:8000/~clemens
From: Gareth McCaughan
Subject: Re: arithmetic
Date: 
Message-ID: <86puxzunzr.fsf@g.local>
Clemens Heitzinger wrote:

> So the Greek "invented" continued fractions and knew, eg, the
> representation of (sqrt 2): [1,2,2,2,2,2,...] and (sqrt 5):
> [2,4,4,4,4,4,...].

Really? I never knew that. Do you have a reference?

-- 
Gareth McCaughan  ················@pobox.com
sig under construction
From: Clemens Heitzinger
Subject: Re: arithmetic
Date: 
Message-ID: <1e0fgd8.hdx67m1p5hpf4N%cheitzin@ag.or.at>
Gareth McCaughan <················@pobox.com> wrote:

> Clemens Heitzinger wrote:
> 
> > So the Greek "invented" continued fractions and knew, eg, the
> > representation of (sqrt 2): [1,2,2,2,2,2,...] and (sqrt 5):
> > [2,4,4,4,4,4,...].
> 
> Really? I never knew that. Do you have a reference?

G.H. Hardy, E.M. Wright: An Introduction to the Theory of Numbers. 426
pages, ISBN 0-19-853171-0.

Btw, if phi is the golden section, then phi^2+phi-1=0 or phi=1/(1+phi),
and phi=[0,1,1,1,1,1,...].  Nice, isn't it?
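(One can watch this converge numerically; a small Python sketch, where iterating x -> 1/(1 + x) evaluates ever-deeper truncations of [0,1,1,1,...]:)

```python
# Each iteration wraps one more level of the continued fraction
# [0, 1, 1, 1, ...] around the starting value:
x = 1.0
for _ in range(60):
    x = 1.0 / (1.0 + x)

golden = (5 ** 0.5 - 1) / 2     # phi as defined above, about 0.618034
assert abs(x - golden) < 1e-12  # converged to the golden section
```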

-- 
Clemens Heitzinger
http://ag.or.at:8000/~clemens
From: Gareth McCaughan
Subject: Re: arithmetic
Date: 
Message-ID: <86r9idn0rf.fsf@g.local>
Clemens Heitzinger wrote:

> Gareth McCaughan <················@pobox.com> wrote:
> 
>> Clemens Heitzinger wrote:
>> 
>>> So the Greek "invented" continued fractions and knew, eg, the
>>> representation of (sqrt 2): [1,2,2,2,2,2,...] and (sqrt 5):
>>> [2,4,4,4,4,4,...].
>> 
>> Really? I never knew that. Do you have a reference?
> 
> G.H. Hardy, E.M. Wright: An Introduction to the Theory of Numbers. 426
> pages, ISBN 0-19-853171-0.
> 
> Btw, if phi is the golden section, then phi^2+phi-1=0 or phi=1/(1+phi),
> and phi=[0,1,1,1,1,1,...].  Nice, isn't it?

I know about the *mathematics* of continued fractions, and
I've read most of Hardy and Wright. When I checked it the
other day, looking to see if it backed up what you said
about the Greeks knowing about continued fractions, I didn't
find anything much about the history of the subject; certainly
nothing about the Greeks.

Incidentally, most people take phi to be (1+root5)/2, not
(root5-1)/2.

-- 
Gareth McCaughan  ················@pobox.com
sig under construction
From: Shin
Subject: Re: arithmetic
Date: 
Message-ID: <381a9bd9.709999@news.iddeo.es>
On 30 Oct 1999 00:35:48 +0000, Gareth McCaughan <················@pobox.com>
wrote:

: I know about the *mathematics* of continued fractions, and
: I've read most of Hardy and Wright. When I checked it the
: other day, looking to see if it backed up what you said
: about the Greeks knowing about continued fractions, I didn't
: find anything much about the history of the subject; certainly
: nothing about the Greeks.
: 
: Incidentally, most people take phi to be (1+root5)/2, not
: (1-root5)/2.

According to [*] the first steps in the theory of continued fractions
were done in Italy, where Pietro Antonio Cataldi (1548--1626), from
Bologna, expressed square roots using them.

Regards,

-- Shin

[*] Carl. B. Boyer, `A History of Mathematics', John Wiley & Sons.
    (Though what I have here is the Spanish translation.)
From: Clemens Heitzinger
Subject: Re: arithmetic
Date: 
Message-ID: <1e0hi5u.szmivk4fyt7oN%cheitzin@ag.or.at>
Gareth McCaughan <················@pobox.com> wrote:

> I know about the *mathematics* of continued fractions, and
> I've read most of Hardy and Wright. When I checked it the
> other day, looking to see if it backed up what you said
> about the Greeks knowing about continued fractions, I didn't
> find anything much about the history of the subject; certainly
> nothing about the Greeks.

I don't have a "hard" reference for this bit of history.  It was
mentioned in the first or second lecture of "Constructive Analysis"
which I once heard here at the TU.

-- 
Clemens Heitzinger
http://ag.or.at:8000/~clemens
From: Christopher Browne
Subject: Re: arithmetic
Date: 
Message-ID: <146S3.38101$7I4.695370@news5.giganews.com>
On Thu, 28 Oct 1999 21:31:36 +0200, Clemens Heitzinger
<········@ag.or.at> wrote: 
>Robert Monfera <·······@fisec.com> wrote:
>
>> Clemens Heitzinger wrote:
>> ...
>> > Pedantically speaking, even the old Greek new continued fractions which
>> > they used to express irrational values (of course -- rational values
>> > don't give continued fractions ;-).
>> 
>> Maybe I'm missing the joke -
>> 
>> (coerce (/ 1 7) 'float)
>> -> 0.14285714285714285
>>      ^     ^     ^
>>
>> Also, is it not true that all continued fractions are rational numbers,
>> and all rational numbers can be expressed as continued fractions
>> (however lengthy)?
>
>I looked at the last articles and now I see the confusion.  Christopher
>talked about decimals and then about continued fractions, but these two
>are two entirely different beasts.  I'm sure he meant decimals (in any
>basis) all the time, but continued fraction triggered something
>(irrelevant in that context) in my brain...

No, I made momentary reference to continued fractions, which would
obviate the issues, but which would be a representation that people
probably wouldn't want to use in practice as it's a Variable Sized Data
Structure.

I then went on to using other bases...
-- 
"Problem  solving under linux  has never  been the  circus that  it is
under AIX."  -- Pete Ehlke in comp.unix.aix
········@hex.net- <http://www.ntlug.org/~cbbrowne/lsf.html>
From: Clemens Heitzinger
Subject: Re: arithmetic
Date: 
Message-ID: <1e0fgik.1hljgpe8oxbb4N%cheitzin@ag.or.at>
Christopher Browne <········@news.hex.net> wrote:

> On Thu, 28 Oct 1999 21:31:36 +0200, Clemens Heitzinger
> <········@ag.or.at> wrote: 

> >I looked at the last articles and now I see the confusion.  Christopher
> >talked about decimals and then about continued fractions, but these two
> >are two entirely different beasts.  I'm sure he meant decimals (in any
> >basis) all the time, but continued fraction triggered something
> >(irrelevant in that context) in my brain...
> 
> No, I made momentary reference to continued fractions, which would
> obviate the issues, but be a representation that people probably
> wouldn't want to use in practice as it's a Variable Sized Data
> Structure.
> 
> I then went on to using other bases...

Sorry.

If I remember correctly, continued fractions are very good at
approximating real numbers.  One could represent a real number by a
function which generates "digits" of the continued fraction
representation on demand.  But I don't know how to do arithmetic with
continued fractions -- it's probably not straightforward.  Assuming we
can do arithmetic with continued fractions fairly efficiently, then this
representation might have advantages in some areas.
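(A sketch of the "digits on demand" idea, in Python for illustration; the recurrence for the convergents p_k/q_k is the standard one, so no division is ever performed until you want a decimal approximation.)

```python
from itertools import islice

def sqrt2_digits():
    """Continued-fraction digits of sqrt(2), generated lazily:
    [1, 2, 2, 2, ...]."""
    yield 1
    while True:
        yield 2

def convergents(digits, n):
    """First n convergents p/q via the standard recurrence
    p_k = a_k*p_(k-1) + p_(k-2), q_k = a_k*q_(k-1) + q_(k-2)."""
    p2, p1 = 0, 1      # p_(-2), p_(-1)
    q2, q1 = 1, 0      # q_(-2), q_(-1)
    result = []
    for a in islice(digits, n):
        p2, p1 = p1, a * p1 + p2
        q2, q1 = q1, a * q1 + q2
        result.append((p1, q1))
    return result

cs = convergents(sqrt2_digits(), 10)
assert cs[:3] == [(1, 1), (3, 2), (7, 5)]   # 1, 3/2, 7/5, ...
p, q = cs[-1]
assert abs(p / q - 2 ** 0.5) < 1e-6         # already very close
```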

-- 
Clemens Heitzinger
http://ag.or.at:8000/~clemens
From: Eugene Zaikonnikov
Subject: Re: arithmetic
Date: 
Message-ID: <941188492.697314@lxms.cit.org.by>
Clemens Heitzinger <········@ag.or.at> wrote in message
····································@ag.or.at...
[snip]
> If I remember correctly, continued fractions are very good at
> approximating real numbers.  One could represent a real number by a
> function which generates "digits" of the continued fraction
> representation on demand.  But I don't know how to do arithmetic with
> continued fractions -- it's probably not straightforward.  Assuming we
> can do arithmetic with continued fractions fairly efficiently, then this
> representation might have advantages in some areas.
>
IIRC, HAKMEM has a chapter on continued fractions, including arithmetic.
Though it may be slightly out of date...

--
  Eugene.
From: Christopher Browne
Subject: Re: arithmetic
Date: 
Message-ID: <MgsS3.67944$y45.744454@news4.giganews.com>
On Fri, 29 Oct 1999 12:21:12 +0200, Eugene Zaikonnikov
<······@removeme.cit.org.by> wrote: 
>Clemens Heitzinger <········@ag.or.at> wrote in message
>····································@ag.or.at...
>[snip]
>> If I remember correctly, continued fractions are very good at
>> approximating real numbers.  One could represent a real number by a
>> function which generates "digits" of the continued fraction
>> representation on demand.  But I don't know how to do arithmetic with
>> continued fractions -- it's probably not straightforward.  Assuming we
>> can do arithmetic with continued fractions fairly efficiently, then this
>> representation might have advantages in some areas.
>>
>IIRC, HAKMEM has a chapter on continued fractions, including arithmetic.
>Though it may be slightly out of date...

The parts that have PDP-8 "hacks" may be somewhat out of date; the
math isn't.

<http://www.fredbox.com/~james/hakmem/hakmem.html> is the HTML version
of HAKMEM; the section on continued fractions is
<http://www.fredbox.com/~james/hakmem/cf.html>.

The downside to using continued fractions is twofold:
a) You effectively need to have a sequence of indefinite size, and
b) That sequence is made up of BIGNUMs, themselves of indefinite
   size.

These are both not overly wonderful properties for a would-be
replacement for floating point, which is always a data type of fixed
size.  

There may be varying FP representations, but they're all identically
sized, which is an important property when you need to serialize them
to disk.

The really critical impedance is that FP values involve a combination of:
a) notation that is in base 10 (e.g. - the decimal literals we read and
write), and
b) an internal fraction that is stored in base 2.
This can't be helpful.

If the base of the internal fraction were divisible by 10, this would
eliminate at least the portion of the impedance that relates to
representing decimal values.

e.g. - with Decimal/Binary floating point, 1/10th = 0.1 winds up being
inexact from the word *go* even though it's an extremely natural value
to write down.

With Decimal/Base (210|240) floating point, that fraction could be
represented *exactly,* as would many *more* decimal values, whether
0.33333, 3.41715, or otherwise.
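(A quick check of that claim, sketched in Python with exact rationals; "terminates" here means the digit expansion in the given base comes to an end, i.e. the value is exactly representable:)

```python
from fractions import Fraction

def terminates_in_base(frac, base, limit=40):
    """True if frac in [0,1) has a terminating digit expansion in base."""
    for _ in range(limit):
        frac *= base
        frac -= int(frac)    # drop the digit, keep the remainder
        if frac == 0:
            return True
    return False

# 1/10 repeats forever as a binary fraction, but not in base 240:
assert not terminates_in_base(Fraction(1, 10), 2)
assert terminates_in_base(Fraction(1, 10), 240)

# Since 10 divides 240, *every* finite decimal terminates in base 240:
assert terminates_in_base(Fraction(33333, 100000), 240)   # 0.33333
assert terminates_in_base(Fraction(41715, 100000), 240)   # 3.41715 - 3
```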
-- 
I have stopped reading Stephen King novels.  Now I just read C code
instead.
-- Richard A. O'Keefe
········@hex.net- <http://www.hex.net/~cbbrowne/compmath.html>
From: Dave Pearson
Subject: Re: arithmetic
Date: 
Message-ID: <slrn81gs4h.lqg.davep.news@hagbard.demon.co.uk>
On Thu, 28 Oct 1999 10:08:14 -0400, Robert Monfera <·······@fisec.com> wrote:

> (As a note, I vaguely remember the Greek calculated pi as (/ 22 3) or
> something similar.)

I think you're thinking of (/ 22 7).

-- 
Take a look in Hagbard's World: | boxquote.el - "Boxed" text quoting.
http://www.acemake.com/hagbard/ | binclock.el - emacs binary clock.
http://www.hagbard.demon.co.uk/ |  uptimes.el - Record emacs uptimes.
emacs software, including.......| quickurl.el - Recall lists of URLs.
From: Jim Driese
Subject: Re: arithmetic
Date: 
Message-ID: <381CE0BE.5D5E2C64@seanet.com>
This is one of the most interesting Usenet posts I have come across in
recent months.  Thanks for sharing.  If you have any more thoughts as to
whether base 210 or 240 is better, at least one person would like to hear
about it.

Regards

Jim Driese

Christopher Browne wrote:

[deleted]

>
> <tangent on weird choices of bases>
> I have the notion that it might be attractive to have sort of
> equivalent to BCD where the base is, rather than 10, 240.  This has
> two notable meritous properties:
>
> a) 240 is nicely divisible by 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 20,
> 24, ...  which means that there are lots of fractional values that
> will reduce nicely without the "repeated decimal" problem.
>
> 1/10th (decimal) is represented as the sequence
>   ( 24 )
> and is exact.
>
> 1/16th is represented as
>   ( 15 )
> and is, again, an exact representation.
>
> In effect, it provides an exact representation without repeats for
> fractions that have 2, 3, and 5 as factors.
>
> b) 240 fits comfortably into 8 bits, thus meaning that it is
> reasonably space-efficient.  (Better than BCD, which can only count up
> to 100 in 8 bits.)
>
> The other most interesting option for "base" would be 210, which is
>   (* 2 3 5 7)
> That wins you the factor of 7, which increases the potential for
> having no repeats.
>
> 1/10th --> ( 21 )
>   which is OK
>
> But 1/16th is represented as the longer sequence
>   ( 13 105 )
> --
> >Ever heard of .cshrc?
> That's a city in Bosnia.  Right?
> (Discussion in comp.os.linux.misc on the intuitiveness of commands.)
> ········@hex.net- <http://www.ntlug.org/~cbbrowne/lsf.html>
From: Chris Double
Subject: Re: arithmetic
Date: 
Message-ID: <wkbt9lqdkh.fsf@double.co.nz>
Hidayet Tunc Simsek <······@EECS.Berkeley.Edu> writes:

> Where does the .00000005 come from?  This accumulates into an
> undesired result.  Is there a way to avoid this?

There is a discussion about this going on in comp.object at the
moment. Well, not that exactly, but the problems of handling financial
values. It covers your question though. It's under the threads with
the subject:

  How do financial institutes...

and

  Fixed Point Numbers

Or something like that anyway.

Chris.
From: William Deakin
Subject: Re: arithmetic
Date: 
Message-ID: <3816DDAD.DA2E90BC@pindar.com>
Hidayet Tunc Simsek wrote:

> Where does the .00000005 come from? This accumulates into an undesired
> result. Is there a way to avoid this?

Yes, do all your calculations in integers ;)
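(Not entirely a joke -- scaled integers are the usual fix for money and the like. A sketch in Python, with tenths as the hypothetical unit; note the double-precision drift shows up for 0.1 + 0.2 rather than the single-float 0.6 + 0.1 of the original post.)

```python
# Keep quantities as integer counts of a fixed small unit (here,
# tenths), and the arithmetic is exact integer arithmetic:
a_tenths = 1                     # stands for 0.1
b_tenths = 2                     # stands for 0.2
assert a_tenths + b_tenths == 3  # exactly 0.3 -- no drift

# The same sum in binary floating point picks up an error, because
# neither 0.1 nor 0.2 is exactly representable in base 2:
assert 0.1 + 0.2 != 0.3
```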

Best Regards,

:) will
From: Thomas A. Russ
Subject: Re: arithmetic
Date: 
Message-ID: <ymid7tzrpsv.fsf@sevak.isi.edu>
Hidayet Tunc Simsek <······@EECS.Berkeley.Edu> writes:

> * (+ 0.6 0.1)
>  
> 0.70000005
>
> Where does the .00000005 come from?  This accumulates into an undesired
> result.
> Is there a way to avoid this?

(+ 6/10 1/10)   =>  7/10

-- 
Thomas A. Russ,  USC/Information Sciences Institute          ···@isi.edu