From: André Thieme
Subject: Counting number of digits
Date: 
Message-ID: <d6u0ve$1go$1@ulric.tng.de>
After reading the thread about cmucl crashing while calculating the
factorial of big numbers, I wanted to know how many digits some big
numbers consist of.

My first solution was
(length (write-to-string (fact 20000)))

I timed it on my machine, running clisp:
CL-USER> (time (length (write-to-string (fact 20000))))

Real time: 3.50504 sec.
Run time: 3.50504 sec.
Space: 304258176 Bytes
GC: 461, GC time: 0.701008 sec.
77338

"Okay" I thought, "it should be possible to count the digits faster".
We don't really need to convert the number to a string and then count
its length.


This brought me to my second solution:
(defun count-digits2 (n &optional (result 0))
   (if (= n 0)
       result
       (count-digits2 (/ (- n (mod n 10)) 10) (1+ result))))
; I know it does not work for n=0, but that doesn't matter here

After compiling it:
CL-USER> (time (count-digits2 (fact 20000)))

Real time: 69.039276 sec.
Run time: 68.29821 sec.
Space: 5011290728 Bytes
GC: 7600, GC time: 11.867064 sec.
77338


Not really the speedup I wanted to see.
Maybe by iteration instead of recursion?

CL-USER> (defun count-digits (n)
	   (if (zerop n)
	       1
	       (let ((result 0))
		 (loop until (zerop n) do
		       (setf n (/ (- n (mod n 10)) 10))
		       (incf result))
		 result)))

CL-USER> (time (count-digits (fact 20000)))

Real time: 97.62037 sec.
Run time: 95.49732 sec.
Space: 5011290728 Bytes
GC: 7600, GC time: 39.036133 sec.
77338


I give up. Can anyone show a solution that runs more efficiently than
(length (write-to-string (fact 20000)))?


André
-- 

From: ······@gmail.com
Subject: Re: Counting number of digits
Date: 
Message-ID: <1116901676.693112.231310@g14g2000cwa.googlegroups.com>
André Thieme wrote:
> After reading the thread about cmucl crashing while calculating the
> factorial of big numbers I wanted to know out of how many digits some
> big numbers consist.
>
> My first solution was
> (length (write-to-string (fact 20000)))
>
> I timed it on my machine, running clisp:
> CL-USER> (time (length (write-to-string (fact 20000))))
>
> Real time: 3.50504 sec.
> Run time: 3.50504 sec.
> Space: 304258176 Bytes
> GC: 461, GC time: 0.701008 sec.
> 77338
>
> "Okay" I thought, "it should be possible to count the digits faster".
> We don't really need to convert the number to a string and then count
> its length.
>
>
> This brought me to my second solution:
> (defun count-digits2 (n &optional (result 0))
>    (if (= n 0)
>        result
>        (count-digits2 (/ (- n (mod n 10)) 10) (1+ result))))
> ; I know it does not work for n=0, but that doesn't matter here
>
> After compiling it:
> CL-USER> (time (count-digits (fact 20000)))
>
> Real time: 69.039276 sec.
> Run time: 68.29821 sec.
> Space: 5011290728 Bytes
> GC: 7600, GC time: 11.867064 sec.
> 77338
>
>
> Not really the speedup I wanted to see.
> Maybe by iteration instead of recursion?
>
> CL-USER> (defun count-digits (n)
> 	   (if (zerop n)
> 	       1
> 	       (let ((result 0))
> 		 (loop until (zerop n) do
> 		       (setf n (/ (- n (mod n 10)) 10))
> 		       (incf result))
> 		 result)))
>
> CL-USER> (time (count-digits (fact 20000)))
>
> Real time: 97.62037 sec.
> Run time: 95.49732 sec.
> Space: 5011290728 Bytes
> GC: 7600, GC time: 39.036133 sec.
> 77338
>
>
> I give up. Can anyone show a solution that runs more efficient than
> (length (write-to-string (fact 20000)))?

I don't know if you will view these as cheating or not but...

* (time (length (write-to-string (fact 20000))))

Evaluation took:
  4.752 seconds of real time
  3.022092 seconds of user run time
  1.085591 seconds of system run time
  0 page faults and
  341,131,848 bytes consed.
77338

* (time (floor (1+ (log (fact 20000) 10))))

Evaluation took:
  3.069 seconds of real time
  1.504822 seconds of user run time
  1.033656 seconds of system run time
  0 page faults and
  339,358,200 bytes consed.
77338
0.2578125

* (time (floor (1+ (loop for i from 1 to 20000 summing (log i 10)))))

Evaluation took:
  0.085 seconds of real time
  0.062878 seconds of user run time
  0.0137 seconds of system run time
  0 page faults and
  3,200,104 bytes consed.
77338
0.0

That last one relies on the identity that log(a * b) = log(a) + log(b).
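
Wrapped up as a function it might look like this (just a quick sketch,
untested beyond the case above; the only subtlety is making sure the
accumulated rounding error never lands right on an integer boundary):

(defun factorial-digits (n)
  ;; Number of decimal digits of n!, computed without ever building
  ;; the factorial itself, using log(a * b) = log(a) + log(b).
  (floor (1+ (loop for i from 1 to n summing (log i 10d0)))))

Summing double-floats keeps the rounding error small even for n in the
tens of thousands.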

Justin Dubs
From: André Thieme
Subject: Re: Counting number of digits
Date: 
Message-ID: <d6v6u0$2u1$1@ulric.tng.de>
······@gmail.com schrieb:

> I don't know if you will view these as cheating or not but...

> * (time (floor (1+ (log (fact 20000) 10))))
> 
> Evaluation took:
>   3.069 seconds of real time
>   1.504822 seconds of user run time
>   1.033656 seconds of system run time
>   0 page faults and
>   339,358,200 bytes consed.
> 77338
> 0.2578125

I don't see this solution as cheating. It will also work for factorials.
There is only one problem with using it for implementing count-digits:
for big numbers it will not always work:
CL-USER> (log 999998880 10)
8.999999
CL-USER> (log 999998881 10)
9.0

If a big number is close to gaining a new digit, the floating point unit
can give inaccurate results. Changing to another float type will help for
bigger numbers, but not for really big numbers:
CL-USER> (coerce 9999999999999999999 'long-float)
9.999999999999999999L18
CL-USER> (coerce 99999999999999999999 'long-float)
1.0L20

CL-USER> (SETF (EXT:LONG-FLOAT-DIGITS) 250)
250
CL-USER> (1+ (floor (log (coerce (parse-integer (make-string 60 
:initial-element #\9)) 'long-float) 10)))
60
CL-USER> (1+ (floor (log (coerce (parse-integer (make-string 70 
:initial-element #\9)) 'long-float) 10)))
70
CL-USER> (1+ (floor (log (coerce (parse-integer (make-string 80 
:initial-element #\9)) 'long-float) 10)))
81


Okay, I have to admit that this would probably not happen in real life
programs, but at the moment I don't see an easy way to solve that
problem. Our long-floats would need as many decimal places as the number
has digits.
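
For the digit counting itself one can at least sidestep floats entirely:
estimate the count from the bit length and then correct it with exact
integer comparisons. Something like this (only a sketch, not tested very
carefully):

(defun count-digits (n)
  ;; Estimate from the bit length (log10(2) is about 0.30103), then
  ;; correct with exact bignum comparisons, so inaccuracy near a power
  ;; of ten cannot produce a wrong answer.
  (if (zerop n)
      1
      (let ((digits (max 1 (floor (* (integer-length n) 30103) 100000))))
        (loop while (>= n (expt 10 digits)) do (incf digits))
        (loop while (< n (expt 10 (1- digits))) do (decf digits))
        digits)))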


> * (time (floor (1+ (loop for i from 1 to 20000 summing (log i 10)))))
> 
> Evaluation took:
>   0.085 seconds of real time
>   0.062878 seconds of user run time
>   0.0137 seconds of system run time
>   0 page faults and
>   3,200,104 bytes consed.
> 77338
> 0.0
> 
> That last one relies on the identity that log(a * b) = log(a) + log(b).

Yes, right - really nice!
For factorials it works excellently; I don't even need to calculate the
factorial itself to get the number of digits.


André
-- 
From: ······@gmail.com
Subject: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <1116946757.907011.8250@g49g2000cwa.googlegroups.com>
André Thieme wrote:
> ······@gmail.com schrieb:
>
> > I don't know if you will view these as cheating or not but...
>
> > * (time (floor (1+ (log (fact 20000) 10))))
> >
> > Evaluation took:
> >   3.069 seconds of real time
> >   1.504822 seconds of user run time
> >   1.033656 seconds of system run time
> >   0 page faults and
> >   339,358,200 bytes consed.
> > 77338
> > 0.2578125
>
> I don't see this solution as cheating. It will also work for factorials.
> There is only one problem with using it for implementing count-digits:
> for big numbers it will not always work:
> CL-USER> (log 999998880 10)
> 8.999999
> CL-USER> (log 999998881 10)
> 9.0
>
> If a big number is close to gaining a new digit, the floating point unit
> can give inaccurate results. Changing to another float type will help for
> bigger numbers, but not for really big numbers:
> CL-USER> (coerce 9999999999999999999 'long-float)
> 9.999999999999999999L18
> CL-USER> (coerce 99999999999999999999 'long-float)
> 1.0L20
>
> CL-USER> (SETF (EXT:LONG-FLOAT-DIGITS) 250)
> 250
> CL-USER> (1+ (floor (log (coerce (parse-integer (make-string 60
> :initial-element #\9)) 'long-float) 10)))
> 60
> CL-USER> (1+ (floor (log (coerce (parse-integer (make-string 70
> :initial-element #\9)) 'long-float) 10)))
> 70
> CL-USER> (1+ (floor (log (coerce (parse-integer (make-string 80
> :initial-element #\9)) 'long-float) 10)))
> 81
>
>
> Okay, I have to admit that this would probably not happen in real life
> programs, but at the moment I don't see an easy way to solve that
> problem. Our long-floats would need as many decimal places as the number
> has digits.

I've often wondered why no languages support infinite-precision
floating-point.  Apparently, bignums were deemed important enough to
standardize in Common Lisp, but not bigfloats.  Anyone have an
explanation for this that they'd like to share with me?

It even seems easy to implement.  Just a cons of bignums.  One for the
number and one for the exponent.  They'd be twice as slow as bignums,
but, if you need infinite-precision floating-point, accuracy is
probably more of a concern than speed...
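
Something along these lines, maybe (a toy sketch with invented names,
reading the cons (M . E) as the value M * 10^E; division would of course
need a precision cutoff):

(defun bf (mantissa exponent)
  ;; A "bigfloat" is just a cons: (MANTISSA . EXPONENT) = mantissa * 10^exponent.
  (cons mantissa exponent))

(defun bf+ (x y)
  ;; Align both operands to the smaller exponent, then add the mantissas.
  (let ((e (min (cdr x) (cdr y))))
    (bf (+ (* (car x) (expt 10 (- (cdr x) e)))
           (* (car y) (expt 10 (- (cdr y) e))))
        e)))

(defun bf* (x y)
  (bf (* (car x) (car y)) (+ (cdr x) (cdr y))))

;; (bf+ (bf 4 -2) (bf 1 -2))  =>  (5 . -2), i.e. exactly 0.05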

Justin Dubs
From: Joel Ray Holveck
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <y7c8y24xl2m.fsf@sindri.juniper.net>
>> Okay, I have to admit that this would probably not happen in real
>> life programs, but at the moment I don't see an easy way how to
>> solve that problem. Our long-floats should have as many decimal
>> places as the number has digits.
> I've often wondered why no languages support infinite-precision
> floating-point.  Apprently, bignums where deemed important enough to
> standardize in Common Lisp, but not bigfloats.  Anyone have an
> explanation for this that they'd like to share with me?

I'm not sure if you're asserting direct relevance here, but you
couldn't use infinite-precision floating point in this case, since
most of the logs he's computing will be irrational.

I don't really see the point of infinite-precision floating point
anyway.  For rational numbers, we already have the rational datatype,
which can express a proper superset of the numbers that bigfloats[1]
can, and the question of the numeric base used becomes a non-problem.
For irrationals, bigfloats blow up anyway.

Really, your two-bignum proposal for bigfloats would be just the same
as using the rational (/ mantissa (expt 10 exponent)), wouldn't it?
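
For instance (a throwaway example, taking the exponent as a count of
decimal places):

;; a "bigfloat" with mantissa 5 and two decimal places:
(/ 5 (expt 10 2))   ; => 1/20
(float 1/20)        ; => 0.05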

joelh

[1] I'm referring to infinite-precision floats, not Macsyma's
bigfloats.
From: ······@gmail.com
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <1116986451.460292.105050@g47g2000cwa.googlegroups.com>
Joel Ray Holveck wrote:
> >> Okay, I have to admit that this would probably not happen in real
> >> life programs, but at the moment I don't see an easy way how to
> >> solve that problem. Our long-floats should have as many decimal
> >> places as the number has digits.
> > I've often wondered why no languages support infinite-precision
> > floating-point.  Apprently, bignums where deemed important enough to
> > standardize in Common Lisp, but not bigfloats.  Anyone have an
> > explanation for this that they'd like to share with me?
>
> I'm not sure if you're asserting direct relevance here, but you
> couldn't use infinite-precision floating point in this case, since
> most of the logs he's computing will be irrational.
>
> I don't really see the point of infinite-precision floating point
> anyway.  For rational numbers, we already have the rational datatype,
> which can express a proper superset of the numbers that bigfloats[1]
> can, and the question of the numeric base used becomes a non-problem.
> For irrationals, bigfloats blow up anyway.
>
> Really, your two-bignum proposal for bigfloats would be just the same
> as using the rational (/ mantissa (expt 10 exponent)), wouldn't it?

Yeah.  Turns out my infinite-precision floating point idea was pretty
half-baked.

I still think that there is a problem to be solved here.  I'm just not
sure exactly what it is or how to solve it... :-)

During the Dynamic Language Wizards talks, Guy Steele said that when a
computer language does something that has an accepted meaning outside
of computers, the computer should either do that thing exactly or
"stay well away from it."  He specifically mentioned that in C, 1/2 is
not equal to anything near one half.

Thankfully, CL takes care of that with rationals.  But:

* (+ 0.04 0.01)

0.049999997

Sorry, I just don't see that as acceptable.  It seems to me that by
default all floating-points should have arbitrary precision (apparently
that's the right term for it...).  0.04 + 0.01 is 0.05.  I don't care
how the machine chooses to represent that computation, I just want the
right answer.

If I want speed and don't mind the loss of accuracy, I'll declare the
type to be 32-bit floating-point or whatever representation I want.

If we want Common Lisp to be abstracted away from the underlying
architecture enough that things like pointers, memory management and
integer-length limitations are all hidden and worked-around, why
shouldn't the same be true of floating point?

This leads me to think that, in the absence of any type declarations,
a number system should behave such that:

* (* 2 (sin 30))

2 * sin(30)

* (* (sqrt 2) (sqrt 2))

2

If this means that slow symbolic manipulation is the default, then that
seems acceptable to me.  Most applications are not numerically
intensive.  They can probably work just fine with symbolic
manipulation.  The ones that need more speed can get it by declaring
types.

It just feels so strange for a high-level language to say this:

* (= 0.05 (+ 0.04 0.01))

NIL

Justin Dubs
From: Greg Menke
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <m33bsbj2tx.fsf@athena.pienet>
······@gmail.com writes:

> Joel Ray Holveck wrote:
> 
> Thankfully, CL takes care of that with rationals.  But:
> 
> * (+ 0.04 0.01)
> 
> 0.049999997
> 
> Sorry, I just don't see that as acceptable.  It seems to me that by
> default all floating-points should have arbitrary precision (apparently
> that's the right term for it...).  0.04 + 0.01 is 0.05.  I don't care
> how the machine chooses to represent that computation, I just want the
> right answer.

By talking about using floating point types you're already talking about
how the machine represents the computation.  The problem is mapping
fractions to binary numbers of specifically limited size.

(float (+ (/ (rational 4) (rational 100)) (/ (rational 1) (rational 100))))

gives you the right answer because it's selecting a type with different
tradeoffs, and here I am being very careful about how I construct the
terms of the arithmetic.  In effect, I am specifying the ratios for both
terms instead of relying on the reader to construct their
representation.

(rational 0.04) and (rational 0.01) will show you where the rounding
problems start.  

Remember, floating point is designed to supply some maximum number of
decimal places of result and an exponent, so the rounding errors and
imprecision are isolated down in the least significant places.  By
making this tradeoff, the floating point type can be made compact and
quick to process compared to CL's rational type.  This accuracy tradeoff
is quite reasonable for some applications and intractable for others- in
physics arithmetic for example, your expression above might be perfectly
fine- the inaccuracy is some 6 orders of magnitude lower than the values
in question.  In financial arithmetic, this behavior is fatal- some
other type is necessary.

> 
> It just feels so strange for a high-level language to say this:
> 
> * (= 0.05 (+ 0.04 0.01))
> 

Why?  Floating point types have tradeoffs like any other numeric type.
A calculator gets around this kind of problem by applying rounding rules
so things come out "right", which is routinely done in financial systems
anyhow.

Gregm
From: GP lisper
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <1117004403.ade389f9eb09b6f9655a0da7716bc6ad@teranews>
>> Joel Ray Holveck wrote:
>> 
>> * (+ 0.04 0.01)
>> 
>> 0.049999997
>> 
>> Sorry, I just don't see that as acceptable.  It seems to me that by

That's because you don't see it as floating point, _you_ see it as

CL-USER> (+ (/ 4 100) (/ 1 100))


-- 
With sufficient thrust, pigs fly fine.
From: Nicolas Neuss
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <87sm0cudx5.fsf@ortler.iwr.uni-heidelberg.de>
······@gmail.com writes:

> * (* 2 (sin 30))
>
> 2 * sin(30)
>
> * (* (sqrt 2) (sqrt 2))
>
> 2
>
> If this means that slow symbolic manipulation is the default, then that
> seems acceptable to me.

What about more difficult algebraic forms?  Should CL solve the halting
problem as well?

Nicolas.
From: Christopher C. Stacy
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <u4qcrpawm.fsf@news.dtpq.com>
······@gmail.com writes:
> * (+ 0.04 0.01)
> 
> 0.049999997
> 
> Sorry, I just don't see that as acceptable.

Acceptable for what purpose?
I suspect that you misunderstand what floating point numbers
are supposed to be used for, and you want some other kind of
number that includes a decimal point.

What application are you trying to program?
From: Bradley J Lucier
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <d7297t$9qv@arthur.cs.purdue.edu>
In article <·············@news.dtpq.com>,
Christopher C. Stacy <······@news.dtpq.com> wrote:
>······@gmail.com writes:
>> * (+ 0.04 0.01)
>> 
>> 0.049999997
>> 
>> Sorry, I just don't see that as acceptable.
>
>Acceptable for what purpose?

I recommend the following post by Fred Gilham:

http://groups-beta.google.com/group/comp.lang.lisp/msg/d452669476b6b4ea?hl=en

I think it explains the issues well.

Brad
From: Pascal Bourguignon
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <878y23l74m.fsf@thalassa.informatimago.com>
······@gmail.com writes:
> Thankfully, CL takes care of that with rationals.  But:
>
> * (+ 0.04 0.01)
>
> 0.049999997
>
> Sorry, I just don't see that as acceptable.  

Good idea: prevent users from entering binary floating points in decimal.

Would you feel better if you could type:
(+  #2r0.0001100110011001100 #2r0.0110011001100110011)  ; 0.4 + 0.1
--> #2r0.0111111111111111111
(which you will note is as far from being:
    #2r0.1000000000000000000
as 0.499999 is from 0.5)
?

or
(+ #2r0.0000001010001111010 #2r0.0000101000111101011) ; 0.04 0.01
--> #2r0.0000110011001100110                          ; "0.05"
while 0.05 is actually:
    #2r0.0000110011001100110011001100110011001100110011001100...1100...

Note that (in my example with 19 binary significant digits) 0.04 + 0.01 is:

(loop for d across "0000110011001100110"
      for i = 1/2 then (/ i 2)
      when (char= #\1 d) sum i)
--> 13107/262144
(coerce 13107/262144 'float)
--> 0.049999237
(coerce 13107/262144 'double-float)
--> 0.049999237060546875d0


So 0.049999997 is quite a precise answer, given your laziness in
specifying an infinite number of digits...


> It seems to me that by
> default all floating-points should have arbitrary precision (apparently
> that's the right term for it...).  0.04 + 0.01 is 0.05.  I don't care
> how the machine chooses to represent that computation, I just want the
> right answer.

But 0.499999... is exactly the exact answer.  I don't know about you, but
I was taught in primary school that 0.9999999... = 1 and that
1 = 0.99999... at exactly the same time (± 30 min) as I was taught about
rational numbers and that 0.111111... = 1/9, and 0.33333... = 1/3.


> If I want speed and don't mind the loss of accuracy, I'll declare the
> type to be 32-bit floating-point or whatever representation I want.
>
> If we want Common Lisp to be abstracted away from the underlying
> architecture enough that things like pointers, memory management and
> integer-length limitations are all hidden and worked-around, why
> shouldn't the same be true of floating point?


> [...]
> It just feels so strange for a high-level language to say this:
>
> * (= 0.05 (+ 0.04 0.01))

http://docs.sun.com/source/806-3568/ncg_goldberg.html

The correct form is:

(< (abs (- 0.05 (+ 0.04 0.01))) epsilon)

Since you're working with 2 digits of precision, you could (setf epsilon 0.001).


otherwise:

[12]> (in-PACKAGE "COM.INFORMATIMAGO.COMMON-LISP.INVOICE")
#<PACKAGE COM.INFORMATIMAGO.COMMON-LISP.INVOICE>
COM.INFORMATIMAGO.COMMON-LISP.INVOICE[13]> (setf *readtable* *CURRENCY-READTABLE*)
#<READTABLE #x204AD776>
COM.INFORMATIMAGO.COMMON-LISP.INVOICE[14]> (+ #m0.04  #m0.01)
0.05 EUR
COM.INFORMATIMAGO.COMMON-LISP.INVOICE[15]> 



-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
In deep sleep hear sound,
Cat vomit hairball somewhere.
Will find in morning.
From: André Thieme
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <d722mh$hv0$1@ulric.tng.de>
Pascal Bourguignon schrieb:

> But 0.499999... is exactly the exact answer.


0.49999... = 0.5
but
0.49999   != 0.5


André
-- 
From: Pascal Bourguignon
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <87oeazjq8s.fsf@thalassa.informatimago.com>
André Thieme <······························@justmail.de> writes:

> Pascal Bourguignon schrieb:
>
>> But 0.499999... is exactly the exact answer.
>
>
> 0.49999... = 0.5
> but
> 0.49999   != 0.5

And when you write 0.04, in effect you've written: #2r0.0001100110...1100...
not 0.04.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
Our enemies are innovative and resourceful, and so are we. They never
stop thinking about new ways to harm our country and our people, and
neither do we. -- Georges W. Bush
From: André Thieme
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <d72c1f$ptl$1@ulric.tng.de>
Pascal Bourguignon schrieb:
> André Thieme <······························@justmail.de> writes:
> 
> 
>>Pascal Bourguignon schrieb:
>>
>>
>>>But 0.499999... is exactly the exact answer.
>>
>>
>>0.49999... = 0.5
>>but
>>0.49999   != 0.5
> 
> 
> And when you write 0.04, in effect you've written: #2r0.0001100110...1100...
> not 0,04.

Both describe a pattern that can be compressed.
0.04 = 0.040000000000000000000000000000000000000000000000000...

So for both, 0.04 and #2r0.0001100110...1100... you would need an
infinite amount of memory if you wanted to save them without compression.
We use by convention a standard compression for numbers that end with
"period zero": we simply don't write the trailing zeros. So, independent
of your number system, it will always be possible to compress the pattern.


The only real problem is irrational numbers. Only an extremely small
amount of them (countably infinitely many) have a pattern that we can
compress. Anyway, the other irrational numbers are only fantasy, so that
does not matter.


André
-- 
From: ··············@hotmail.com
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <1117045517.773650.277470@f14g2000cwb.googlegroups.com>
André Thieme wrote:
> Pascal Bourguignon schrieb:
> > André Thieme <······························@justmail.de> writes:
> >
> >
> >>Pascal Bourguignon schrieb:
> >>
> >>
> >>>But 0.499999... is exactly the exact answer.
> >>
> >>
> >>0.49999... = 0.5
> >>but
> >>0.49999   != 0.5
> >
> >
> > And when you write 0.04, in effect you've written: #2r0.0001100110...1100...
> > not 0,04.
>
> Both describe a pattern that can be compressed.
> 0.04 = 0.040000000000000000000000000000000000000000000000000...

  That doesn't address the base problem. How do you create a practical
implementation that "recognizes" the base-2 periodic decimal (as
opposed to the base-10 finite decimal) in order to compress it? Use
rationals?

  1/3 is a finite 0.1 in base 3, and a repeating decimal in base 2 or base 10.
Yes, it is "compressible" in the sense that we can use rationals with
fixnums to represent it, but that is not floating point computation.
Does your scheme adjust the representation base to make the compression
possible?

>
> So for both, 0.04 and #2r0.0001100110...1100... you would need an
> infinite amount of memory if you wanted to save them without compression.
> We use by convention a standard compression for numbers that end with
> "period zero" by not writing them. So, independent from your number
> system it will always be possible to compress the pattern.

What if this "period zero" only shows up in the 32-billionth place?
After 2^64 places?

>
> The only real problem are irrational numbers. Only an extremly small
> amount of them (countable infinite) have a pattern that we can compress.
> Anyway, the other irrational numbers are only fantasy, so that does not
> matter.

There are still an (infinite) number of rational numbers with
denominators too large to fit into any practical computer memory,
compressed or not. These also cannot be represented. The numbers that
can be represented in a practical computer memory are a quite finite
set, something much less than 256^(2^64) in cardinality. That is quite
far from being even countably infinite.

It is fun to discuss the properties of various sets of numbers and the
infinite series expansions that represent them. The relevance to
practical computing, however, is not clear.
From: Pascal Bourguignon
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <87br6zjf7w.fsf@thalassa.informatimago.com>
···············@hotmail.com" <············@gmail.com> writes:

> André Thieme wrote:
>> Pascal Bourguignon schrieb:
>> > André Thieme <······························@justmail.de> writes:
>> >
>> >
>> >>Pascal Bourguignon schrieb:
>> >>
>> >>
>> >>>But 0.499999... is exactly the exact answer.
>> >>
>> >>
>> >>0.49999... = 0.5
>> >>but
>> >>0.49999   != 0.5
>> >
>> >
>> > And when you write 0.04, in effect you've written: #2r0.0001100110...1100...
>> > not 0,04.
>>
>> Both describe a pattern that can be compressed.
>> 0.04 = 0.040000000000000000000000000000000000000000000000000...
>
>   That doesn't address the base problem. How do you create a practical
> implementation that "recognizes" the base-2 periodic decimal (as
> opposed to the base-10 finite decimal) in order to compress it? Use
> rationals?

I think users who are used to writing (decimal) numbers in base Ten
expect the computer to do the computing in base Ten too.

We scientists prefer the efficiency of binary computers, knowing very
well that base Ten won't gain us anything for the kind of computing we
do, but accountants don't have the same point of view.  For them, BCD
is a must.

One may wonder why an accountant would want to use Lisp instead of COBOL,
but then, we can't regret Lisp being successful.

The "hack" mentioned elsewhere is not good for these accountants, who
may object that 1/20 is not 0.05. (Just try to write a check for
5091/20 EUR, and tell us how much the bank paid.)  So we'll have to
write a full-fledged decimal arithmetic, which would track the
precision of the numbers manipulated.

0.04 + 0.011 --> 0.051
0.04 * 0.01  --> 0.0004
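
A minimal sketch of such a type (names invented here): a scaled integer
together with a count of decimal places, so the precision of results can
be tracked exactly:

(defstruct (dec (:constructor make-dec (units scale))) units scale)

(defun dec+ (a b)
  ;; Scale both operands to the larger number of decimal places, then add.
  (let ((s (max (dec-scale a) (dec-scale b))))
    (make-dec (+ (* (dec-units a) (expt 10 (- s (dec-scale a))))
                 (* (dec-units b) (expt 10 (- s (dec-scale b)))))
              s)))

(defun dec* (a b)
  ;; Multiplying adds the scales: 2 places times 2 places gives 4 places.
  (make-dec (* (dec-units a) (dec-units b))
            (+ (dec-scale a) (dec-scale b))))

(defun dec-string (d)
  (multiple-value-bind (int frac) (truncate (dec-units d) (expt 10 (dec-scale d)))
    (format nil "~D.~V,'0D" int (dec-scale d) frac)))

;; (dec-string (dec+ (make-dec 4 2) (make-dec 11 3)))  => "0.051"
;; (dec-string (dec* (make-dec 4 2) (make-dec 1 2)))   => "0.0004"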


> There are still an (infinite) number of rational numbers with
> denominators too large to fit into any practical computer memory,
> compressed or not. These also cannot be represented. The numbers that
> can be represented in a practical computer memory are a quite finite
> set, something much less than 256^(2^64) in cardinality. That is quite
> far from being even countably infinite.
>
> It is fun to discuss the properties of various sets of numbers and the
> infinite series expansions that represent them. The relevance to
> practical computing, however, is not clear.

One, Two, Three, Much.


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

Nobody can fix the economy.  Nobody can be trusted with their finger
on the button.  Nobody's perfect.  VOTE FOR NOBODY.
From: Joel Ray Holveck
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <y7cbr6zht8x.fsf@sindri.juniper.net>
> The "hack" mentionned elsewhere is not good for these accountants who
> may object that 1/20 is not 0.05.

I suspect that it wouldn't be too difficult to write a routine to
print-- in precise decimal notation-- any given rational.

I have no intention of actually doing so, which is why I feel bold
enough to make such claims.

joelh
From: Geoffrey Summerhayes
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <pPnle.17723$Ot6.1211368@news20.bellglobal.com>
"Joel Ray Holveck" <·····@juniper.net> wrote in message 
····················@sindri.juniper.net...
>> The "hack" mentionned elsewhere is not good for these accountants who
>> may object that 1/20 is not 0.05.
>
> I suspect that it wouldn't be too difficult to write a routine to
> print-- in precise decimal notation-- any given rational.
>
> I have no intention of actually doing so, which is why I feel bold
> enough to make such claims.
>

Oh, come on, give it a try!

http://online-judge.uva.es/p/v2/202.html

Unfortunately, they only allow C, C++, Java, and Pascal
solutions to be submitted.

Anyone have a solution that doesn't have an O(denominator)
space requirement? It's a *bit* of a hitch. ;-)

--
Geoff
From: Pascal Bourguignon
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <87wtplhh74.fsf@thalassa.informatimago.com>
"Geoffrey Summerhayes" <·············@hotmail.com> writes:

> "Joel Ray Holveck" <·····@juniper.net> wrote in message 
> ····················@sindri.juniper.net...
>>> The "hack" mentionned elsewhere is not good for these accountants who
>>> may object that 1/20 is not 0.05.
>>
>> I suspect that it wouldn't be too difficult to write a routine to
>> print-- in precise decimal notation-- any given rational.
>>
>> I have no intention of actually doing so, which is why I feel bold
>> enough to make such claims.
>>
>
> Oh, come on, give it a try!
>
> http://online-judge.uva.es/p/v2/202.html
>
> Unfortunately, they only allow C, C++, Java, and Pascal
> solutions to be submitted.
>
> Anyone have a solution that doesn't have an O(denominator)
> space requirement? It's a *bit* of a hitch. ;-)

Where's the challenge?



(defun divide (num denum)
  (let (p)
    (labels
      ((divide-step
        (num quos rems)
        (cond
         ((setf p (position num rems))
          (loop                         ; adjust the cycles
           with digits = quos
           while (eql (position (first digits) digits :start 1) (1+ p))
           do (pop digits)
           finally (return (values (nreverse (subseq digits (1+ p)))
                                   (nreverse (subseq digits 0 (1+ p)))))))
         (t (multiple-value-bind (quo res) (truncate num denum)
              (divide-step (* 10 res) (cons quo quos) (cons num rems)))))))
      (multiple-value-bind (quo res) (truncate num denum)
          (multiple-value-call
           (function values)
           quo
           (divide-step (* 10 res) '() (if (zerop quo) (list num) '())))))))


(loop for (n d) in '((1 6) (5 7) (1 250) (300 31) (655 990)
                     (76 25) (5 43) (1 397))
      do (multiple-value-bind (quo prefix cycle) (divide n d)
           (format t "~%~D/~D = ~D.~{~D~}(~{~D~}~:[~;...~])~%~
                      ~&   ~D number of digits in repeating cycle~2%"
                   n d quo prefix
                   (subseq cycle 0 (min 50 (length cycle)))
                    (< 50 (length cycle))
                    (length cycle))))



1/6 = 0.1(6)
   1 number of digits in repeating cycle


5/7 = 0.(714285)
   6 number of digits in repeating cycle


1/250 = 0.004(0)
   1 number of digits in repeating cycle


300/31 = 9.(677419354838709)
   15 number of digits in repeating cycle


655/990 = 0.6(61)
   2 number of digits in repeating cycle


76/25 = 3.04(0)
   1 number of digits in repeating cycle


5/43 = 0.(116279069767441860465)
   21 number of digits in repeating cycle


1/397 = 0.(00251889168765743073047858942065491183879093198992...)
   99 number of digits in repeating cycle

NIL
[257]>


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

The world will now reboot.  don't bother saving your artefacts.
From: Geoffrey Summerhayes
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <PfJle.10087$dZ5.908126@news20.bellglobal.com>
"Pascal Bourguignon" <···@informatimago.com> wrote in message 
···················@thalassa.informatimago.com...
> "Geoffrey Summerhayes" <·············@hotmail.com> writes:
>
>>
>> http://online-judge.uva.es/p/v2/202.html
>>
>> Unfortunately, they only allow C, C++, Java, and Pascal
>> solutions to be submitted.
>>
>> Anyone have a solution that doesn't have an O(denominator)
>> space requirement? It's a *bit* of a hitch. ;-)
>
> Where's the challenge?
>

*solution snipped*

Challenge? It was probably the "gimme" in that year's ACM contest.

But try removing the line, "None of the input integers exceeds 3000."
and make it work for arbitrarily large (bignum) rationals w/o running
into time/space constraints.

--
Geoff 
From: Wade Humeniuk
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <Xoule.23626$tt5.8131@edtnps90>
Geoffrey Summerhayes wrote:

> Oh, come on, give it a try!
> 
> http://online-judge.uva.es/p/v2/202.html
> 
> Unfortunately, they only allow C, C++, Java, and Pascal
> solutions to be submitted.
> 
> Anyone have a solution that doesn't have an O(denominator)
> space requirement? It's a *bit* of a hitch. ;-)
> 

Here is another one,

CL-USER 9 > (print-repeating 1/3)
0.(3)
NIL

CL-USER 10 > (print-repeating 76/25)
3.04(0)
NIL

CL-USER 11 > (print-repeating 1/397 50)
0.(00251889168765743073047858942065491183879093198992...)
NIL

CL-USER 32 > (print-repeating 1/397 120)
0.(002518891687657430730478589420654911838790931989924433249370277078085642317380352644836272040302267)
NIL

CL-USER 12 > (print-repeating 10)
10
NIL

CL-USER 13 > (get-repeating 2/3)
((0 . 2/3) (6 . 20/3))
20/3

CL-USER 14 > (get-repeating 76/25)
((3 . 76/25) (0 . 2/5) (4 . 4) (0 . 0))
0

(defun get-repeating (rational &optional (max-search 99))
   (let* ((result nil)
          (repeating-value
           (loop for numerator = (numerator rational)
                 for denominator = (denominator rational)
                 for i from 0
                 do
                 (cond
                  ((> i max-search) (return nil))
                  ((zerop rational)
                   (push (cons 0 0) result)
                   (return 0))
                  ((>= numerator denominator)
                   (multiple-value-bind (q r)
                       (floor numerator denominator)
                     (push (cons q rational) result)
                     (setf rational (* 10 (/ r denominator)))
                     (when (member rational result :test #'= :key #'cdr)
                       (return rational))))
                  ((< numerator denominator)
                   (push (cons 0 rational) result)
                   (setf rational (* 10 rational)))))))
     (values (nreverse result) repeating-value)))

(defun print-repeating (rational &optional (max-search 99))
   (multiple-value-bind (expansion repeating-value)
       (get-repeating rational max-search)
     (let* ((reverse (reverse expansion))
            (length (length expansion))
            (last-value (caar (reverse expansion))))
       (cond
        ((not repeating-value)
         (format t "~A.(~{~A~^~}...)~%" (caar expansion) (mapcar #'car (cdr expansion))))
        ((and (= 2 length)
              (zerop last-value))
         (format t "~A~%" (caar expansion)))
        ((zerop last-value)
         (pop reverse)
         (setf expansion (reverse reverse))
         (format t "~A." (caar expansion))
         (format t "~{~A~^~}" (mapcar #'car (cdr expansion)))
         (format t "(0)~%"))
        (t
         (format t "~A." (caar expansion))
         (loop for digit in (mapcar #'car (cdr expansion))
               for rational in (mapcar #'cdr (cdr expansion))
               do
               (when (= repeating-value rational)
                 (write-char #\())
               (princ digit))
         (format t ")~%"))))))

Wade
From: Christopher C. Stacy
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <ull6354nj.fsf@news.dtpq.com>
Pascal Bourguignon <···@informatimago.com> writes:
> One may wonders why an accountant wants to use Lisp instead of COBOL,
> but then, we can't regret that Lisp be successfull.

Hey, I've used Lisp for accounting systems.
(Obviously I didn't use floating point, but this issue 
is, er, exactly the same in all mainstream languages.)
From: Greg Menke
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <m3psvfnd5t.fsf@athena.pienet>
······@news.dtpq.com (Christopher C. Stacy) writes:

> Pascal Bourguignon <···@informatimago.com> writes:
> > One may wonders why an accountant wants to use Lisp instead of COBOL,
> > but then, we can't regret that Lisp be successfull.
> 
> Hey, I've used Lisp for accounting systems.
> (Obviously I didn't use floating point, but this issue 
> is, er, exactly the same in all mainstream languages.)


I used C and Visual Basic for my last one- I REALLY wish I had Lisp at
the time.  The hacks we had to work out just to make the systems work
are the worst sort of unmaintainable crapola.  I think Lisp would do
just fine, and probably lots better than COBOL for many things
financial.  Remember how difficult it is to express complex record
processing in COBOL?

Gregm
From: Kent M Pitman
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <upsvesoxz.fsf@nhplace.com>
······@news.dtpq.com (Christopher C. Stacy) writes:

> Pascal Bourguignon <···@informatimago.com> writes:
> > One may wonders why an accountant wants to use Lisp instead of COBOL,
> > but then, we can't regret that Lisp be successfull.
> 
> Hey, I've used Lisp for accounting systems.
> (Obviously I didn't use floating point, but this issue 
> is, er, exactly the same in all mainstream languages.)

As long as you don't try to calculate the US national debt.
In that case, you'd want Lisp (since it's got bignums).
From: GP lisper
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <1117144803.e1bad31baa3b96b2e69a06c3c453f2f6@teranews>
On Thu, 26 May 2005 02:22:54 GMT, <······@nhplace.com> wrote:
> ······@news.dtpq.com (Christopher C. Stacy) writes:
>
>> Hey, I've used Lisp for accounting systems.
>> (Obviously I didn't use floating point, but this issue 
>> is, er, exactly the same in all mainstream languages.)
>
> As long as you don't try to calculate the US national debt.
> In that case, you'd want Lisp (since it's got bignums).

Even the daily trading volume on currencies needs bignums.

Compared to a typical householder, with a mortgage a multiple of
yearly household income, the national debt is a better risk.

It's a multi-trillion dollar world nowadays.  It's a crime that certain
parties only tell part of the story.


-- 
With sufficient thrust, pigs fly fine.
From: Barry Jones
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <4294ca8b$0$6518$8f2e0ebb@news.shared-secrets.com>
André Thieme wrote:

>
> The only real problem are irrational numbers. Only an extremly small
> amount of them (countable infinite) have a pattern that we can compress.
> Anyway, the other irrational numbers are only fantasy, so that does not
> matter.  

André, I'm curious about your last sentence. Why are the others any more 
or less fantasy than the compressible irrationals? Because of our 
representation of them?

Barry
From: Dave Seaman
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <d72jsn$ij7$1@mailhub227.itcs.purdue.edu>
On Wed, 25 May 2005 15:21:25 -0400, Barry Jones wrote:
> André Thieme wrote:

>>
>> The only real problem are irrational numbers. Only an extremly small
>> amount of them (countable infinite) have a pattern that we can compress.
>> Anyway, the other irrational numbers are only fantasy, so that does not
>> matter.  

> André, I'm curious about your last sentence. Why are the others any more 
> or less fantasy than the compressible irrationals? Because of our 
> representation of them?

Since there are uncountably many real numbers, but only countably many
character strings that can be used to describe them, it follows that the
vast majority of real numbers (almost all of them) lie beyond our powers
of description.

As I said earlier, we can describe algebraic numbers and some
transcendentals.


-- 
Dave Seaman
Judge Yohn's mistakes revealed in Mumia Abu-Jamal ruling.
<http://www.commoncouragepress.com/index.cfm?action=book&bookid=228>
From: André Thieme
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <d731gj$cfc$1@ulric.tng.de>
Barry Jones schrieb:
> André Thieme wrote:
> 
>>
>> The only real problem are irrational numbers. Only an extremly small
>> amount of them (countable infinite) have a pattern that we can compress.
>> Anyway, the other irrational numbers are only fantasy, so that does not
>> matter.  
> 
> 
> André, I'm curious about your last sentence. Why are the others any more 
> or less fantasy than the compressible irrationals? Because of our 
> representation of them?


Dave Seaman explained it...
The problem with our "representation of them" is: we don't have one.
It is impossible to represent them in any way. There is no formula,
no trick, nothing that we could use to represent most irrational
numbers. We only know very few of them, like Pi, (sqrt 2), or some that
I can describe in other ways, like:
12.121122111222111122221111122222 etc.
For these we can give algorithms (the compression) to reproduce many
decimal places. For the rest, we only know they exist (in a mathematical
world), but we can't represent them.


Another reason for calling them fantasy, and this is true for all
irrational numbers, is that they are only a thought in a mathematical world.
At least I don't believe there is anything in this universe whose
measures have to do with an irrational number. Whenever I look at a
circle I only think it is one. In fact it is an object that looks like
one without being one.


André
-- 
From: Barry Jones
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <42953042$0$6542$8f2e0ebb@news.shared-secrets.com>
André Thieme wrote:

> Barry Jones schrieb:
>
>> André Thieme wrote:
>>
>>>
>>> The only real problem are irrational numbers. Only an extremly small
>>> amount of them (countable infinite) have a pattern that we can 
>>> compress.
>>> Anyway, the other irrational numbers are only fantasy, so that does not
>>> matter.  
>>
>>
>>
>> André, I'm curious about your last sentence. Why are the others any 
>> more or less fantasy than the compressible irrationals? Because of 
>> our representation of them?
>
>
>
> Dave Seamn explained it...
> The problem with our "representation of them" is: we don't have it.
> It is impossible to represent them in any way. There is no formular,
> no trick no something that we could use to represent most irrational
> numbers. We only know very few of them, like Pi, (sqrt 2) or some that
> I can describe in other ways like:
> 12.121122111222111122221111122222 etc..
> For these we can give algorithms (the compression) to reproduce many
> decimal places. We only know they exist (in a mathematical world),
> but can't represent them.
>
>
> Another reason for calling them fantasy, and this is true for all
> irrational numbers that they are only a thought in a mathematical world.
> At least I don't believe there is anything in this universe whose
> measures have to do with an irrational number. Whenever I look at a
> circle I only think it is one. In fact it is an object that looks like
> one wihthout beeing one.
>
>
> André

André and Dave, thanks for the response. I understand your point now.

As an electrical engineer from the analog days, I was always aware that 
my representation falls short of the things it describes. It was 
abundantly clear to those of us who grew up with slide rules where three 
digits of accuracy was part of our design constraints. From a circuit's 
behavior however, I can infer that those fantasy numbers must indeed 
exist. Otherwise di/dt becomes a bit of a problem, no?

There's still something wonderful about analog scopes, analog meters, 
and analog computers. I think the attraction is that these devices do 
not digitize real world values. It always feels closer to reality.

I know that a smooth curve on an oscilloscope represents something 
that's passing through all those intermediate values, even the ones we 
don't care to write about explicitly. I can expand a portion of a 
waveform as far as my oscilloscope can go, and it's still smooth. If I 
want to go further, I get a better scope. :)

An analog meter lets you see the rate of change of a signal by watching 
the way the needle moves. Try that with a digital meter.

I've been in the digital world for the last twenty years or so, and I'm 
starting to see things through rasterized glasses too. But I remember 
the analog world . . . life exists beyond computers. . . I think!?!?

Barry
From: Raffael Cavallaro
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <2005052523564216807%raffaelcavallaro@pasdespamsilvousplaitdotmaccom>
On 2005-05-25 22:37:16 -0400, Barry Jones <········@acm.org> said:

> I know that a smooth curve on an oscilloscope represents something 
> that's passing through all those intermediate values, even the ones we 
> don't care to write about explicitly. I can expand a portion of a 
> waveform as far as my oscilloscope can go, and it's still smooth. If I 
> want to go further, I get a better scope. :)

Does it? Isn't charge/voltage quantized? Doesn't this mean that the 
"smooth curve" actually takes on discrete values separated by very 
small amounts? Aren't physicists talking about the quantization of 
space-time itself now too? In other words, is there any physical 
quantity which is actually continuous at the quantum level? Not a 
rhetorical question - I'm asking as my knowledge of physics is scant and
antiquated.
From: Pascal Bourguignon
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <87psveipx4.fsf@thalassa.informatimago.com>
Barry Jones <········@acm.org> writes:
> There's still something wonderful about analog scopes, analog meters,
> and analog computers. I think the attraction is that these devices do
> not digitize real world values. It always feels closer to reality.
>
> I know that a smooth curve on an oscilloscope represents something
> that's passing through all those intermediate values, even the ones we
> don't care to write about explicitly. I can expand a portion of a
> waveform as far as my oscilloscope can go, and it's still smooth. If I
> want to go further, I get a better scope. :)
>
> An analog meter lets you see the rate of change of a signal by
> watching the way the needle moves. Try that with a digital meter.
>
> I've been in the digital world for the last twenty years or so, and
> I'm starting to see things through rasterized glasses too. But I
> remember the analog world . . . life exists beyond computers. . . I
> think!?!?

Good point, but physicists seem to tell us that the world is indeed
discrete (Planck time, Planck distance, etc.), therefore double floating
point rationals are enough (in the exponent range) to enumerate
physical world things, and normal ratios with 40-50 digits are enough
precision if you know an exact value, which you cannot for a precise
time or a precise speed anyway...

Instruments you can find in a normal house give 2 or 3 significant
digits.  Laboratory instruments may give perhaps 6 significant digits.
You need big budgets with DOD funding to get 8 or 10 digits.  Who
needs floating points with more than 20 significant digits???

And don't be misled, we know very few physical constants to more
than 4 digits, and most of the time they are _defined_, like c, which
is _defined_ to be exactly 299792458 m/s.  And note how it's an
integer number, not a real one ;-).

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

The world will now reboot.  don't bother saving your artefacts.
From: Greg Menke
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <m3k6lmo99l.fsf@athena.pienet>
Pascal Bourguignon <···@informatimago.com> writes:

> Instruments you can find in a normal house give 2 or 3 significant
> digits.  Laboratory instruments may give perhaps 6 significant digits.
> You need big budgets with DOD founding to get 8 or 10 digits.  Who
> needs floating points with more than 20 significant digits???

Intermediate terms in calculations may well need that many or more.

Gregm
From: Rob Warnock
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <Db-dnXXqLMpVDwjfRVn-qQ@speakeasy.net>
Pascal Bourguignon  <···@informatimago.com> wrote:
+---------------
| And don't be misled, very little physical constants we know with more
| than 4 digits, and  most of the time they are _defined_, like c which
| is _defined_ to be exactly 299792458 m/s.  And note how it's an
| integer number, not a real one ;-).
+---------------

Well, actually, as of 1983 the meter is *defined* to be:

    <http://en.wikipedia.org/wiki/Meter>
    ...the length of the path travelled by light in an absolute vacuum
    during a time interval of exactly 1/299,792,458 of a second.

So the meter is now a derived unit defined in terms of the speed
of light and the second, rather than a basic unit of its own.[1]

And as of 1967, the second is *defined* to be as follows:

    <http://en.wikipedia.org/wiki/Second>
    [The second] is defined as the duration of 9,192,631,770 periods
    of the radiation corresponding to the transition between the two
    hyperfine levels of the ground state of the caesium-133 atom at
    zero kelvins.
    ...
    The ground state is defined at zero magnetic field. ...


-Rob

[1] Between 1960 and 1983 the meter was defined to be "1650763.73
    wavelengths in vacuum of the radiation corresponding to the
    transition between levels 2p10 and 5d5 of the krypton-86 atom"
    (the orange-red emission line) in a vacuum.

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Geoffrey Summerhayes
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <_aole.17882$Ot6.1215618@news20.bellglobal.com>
"Pascal Bourguignon" <···@informatimago.com> wrote in message 
···················@thalassa.informatimago.com...
> Barry Jones <········@acm.org> writes:

*snip*

>> An analog meter lets you see the rate of change of a signal by
>> watching the way the needle moves. Try that with a digital meter.
>>
>> I've been in the digital world for the last twenty years or so, and
>> I'm starting to see things through rasterized glasses too. But I
>> remember the analog world . . . life exists beyond computers. . . I
>> think!?!?
>
> Good point but physicists seem to tell us that the world is indeed
> discreet (Plank time, Plank distance, etc) therefore double floating
> point rationals are enough (in the exponent range) to enumerate
> physical world things, and normal ratio with 40-50 digits are enough
> precision if you know an exact value, which you cannot for a precise
> time or a precise speed anyway...
>

Well, I'm starting to feel that if a physicist came across a locked
door with no keyhole and mentioned it to someone, he would later be
accused of assuming there was NOTHING(tm) on the other side.

--
Geoff
From: GP lisper
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <1117144805.2bba42d004431a0c1345d140eb50141a@teranews>
On Thu, 26 May 2005 06:15:03 +0200, <···@informatimago.com> wrote:
>
> Good point but physicists seem to tell us that the world is indeed
> discreet (Plank time, Plank distance, etc) therefore double floating
> point rationals are enough (in the exponent range) to enumerate
> physical world things, and normal ratio with 40-50 digits are enough
> precision if you know an exact value, which you cannot for a precise
> time or a precise speed anyway...

Ah, not so.  I was present when Kip Thorne showed how to measure a
simple harmonic oscillator exactly.  Complicated trick, but the
audience at Cal Tech did not disagree.


-- 
With sufficient thrust, pigs fly fine.
From: André Thieme
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <d74dlu$ikt$1@ulric.tng.de>
Barry Jones schrieb:
> André Thieme wrote:
> 
>> Barry Jones schrieb:
>>
>>> André Thieme wrote:
>>>
>>>>
>>>> The only real problem are irrational numbers. Only an extremly small
>>>> amount of them (countable infinite) have a pattern that we can 
>>>> compress.
>>>> Anyway, the other irrational numbers are only fantasy, so that does not
>>>> matter.  
>>>
>>>
>>>
>>>
>>> André, I'm curious about your last sentence. Why are the others any 
>>> more or less fantasy than the compressible irrationals? Because of 
>>> our representation of them?
>>
>>
>>
>>
>> Dave Seamn explained it...
>> The problem with our "representation of them" is: we don't have it.
>> It is impossible to represent them in any way. There is no formular,
>> no trick no something that we could use to represent most irrational
>> numbers. We only know very few of them, like Pi, (sqrt 2) or some that
>> I can describe in other ways like:
>> 12.121122111222111122221111122222 etc..
>> For these we can give algorithms (the compression) to reproduce many
>> decimal places. We only know they exist (in a mathematical world),
>> but can't represent them.
>>
>>
>> Another reason for calling them fantasy, and this is true for all
>> irrational numbers that they are only a thought in a mathematical world.
>> At least I don't believe there is anything in this universe whose
>> measures have to do with an irrational number. Whenever I look at a
>> circle I only think it is one. In fact it is an object that looks like
>> one wihthout beeing one.
>>
>>
>> André
> 
> 
> André and Dave, thanks for the response. I understand your point now.
> 
> As an electrical engineer from the analog days, I was always aware that 
> my representation falls short of the things it describes. It was 
> abundantly clear to those of us who grew up with slide rules where three 
> digits of accuracy was part of our design constraints. From a circuit's 
> behavior however, I can infer that those fantasy numbers must indeed 
> exist. Otherwise di/dt becomes a bit of a problem, no?
> 
> There's still something wonderful about analog scopes, analog meters, 
> and analog computers. I think the attraction is that these devices do 
> not digitize real world values. It always feels closer to reality.
> 
> I know that a smooth curve on an oscilloscope represents something 
> that's passing through all those intermediate values, even the ones we 
> don't care to write about explicitly. I can expand a portion of a 
> waveform as far as my oscilloscope can go, and it's still smooth. If I 
> want to go further, I get a better scope. :)
> 
> An analog meter lets you see the rate of change of a signal by watching 
> the way the needle moves. Try that with a digital meter.
> 
> I've been in the digital world for the last twenty years or so, and I'm 
> starting to see things through rasterized glasses too. But I remember 
> the analog world . . . life exists beyond computers. . . I think!?!?
> 
> Barry


Hello Barry!

Okay, what you say might be true. Perhaps in the analog part of our
world there exists something like irrational numbers. I can't disprove
that. However, Barry, I don't buy it!
I believe in physics - and quantum physics tells us that everything
is quantized (others already pointed that part out).
So when you see a representation of irrational numbers in circuits, that
is only an illusion. The thing is that our minds are limited. When we
have Pi calculated to 10 decimal places, for us humans it is /exactly/
the same as Pi calculated to 20 trillion decimal places. For us it is
indistinguishable from the rational number. If you don't believe this,
then touch your keyboard so lightly that it only moves by a billionth
of a meter. If you can see that it moved, you are really good ;)
As I said, from seeing circles I could infer that irrational numbers
exist. But even if something looks like a perfect circle, it isn't one
at the molecular level.

The funny thing about the analog is: it does not exist. We live in a
digital universe.
Some leading physicists and mathematicians even believe that we are
living only in a simulated world, inside of a computer program. But
okay, now this is really getting philosophical ;)


André
-- 
From: Thomas A. Russ
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <ymipsvdvooc.fsf@sevak.isi.edu>
André Thieme wrote:

> Hello Barry!
> 
> Okay, it might be true what you say. Perhaps in the analog part of our
> world there exist something like irrational numbers. I can't disprove
> that. However, Barry, I don't buy it!

Well, I would say that there are a fair number of irrational numbers in
the analog world.   Pi and the square root of 2 for starters.

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Raffael Cavallaro
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <2005052616002716807%raffaelcavallaro@pasdespamsilvousplaitdotmaccom>
On 2005-05-26 14:15:47 -0400, ···@sevak.isi.edu (Thomas A. Russ) said:

> Well, I would say that there are a fair number of irrational numbers in
> the analog world.   Pi and the square root of 2 for starters.

I believe you're missing the point. pi as defined relies on the 
existence of *perfect* circles. No perfect circles in the real world 
(and the quantum nature of the physical world means all "circles" are 
actually slightly bumpy) means no pi.

Similarly, square roots are a mathematical construct. Basically this is 
an argument between quantum mechanics and mathematical platonism. 
Physicists hold that the platonic view of the world is not borne out by 
physical experimentation. If all physical quantities are discrete, not 
continuous, then there are no real world irrational quantities - every 
real world quantity can be reduced to an integer number of the 
fundamental unit for that quantity (charge, length, time, etc.).
From: ··············@hotmail.com
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <1117141237.477112.94170@g43g2000cwa.googlegroups.com>
Raffael Cavallaro wrote:
> On 2005-05-26 14:15:47 -0400, ···@sevak.isi.edu (Thomas A. Russ) said:
>
> > Well, I would say that there are a fair number of irrational numbers in
> > the analog world.   Pi and the square root of 2 for starters.
>
> I believe you're missing the point. pi as defined relies on the
> existence of *perfect* circles. No perfect circles in the real world
> (and the quantum nature of the physical world means all "circles" are
> actually slightly bumpy) means no pi.
>
> Similarly, square roots are a mathematical construct. Basically this is
> an argument between quantum mechanics and mathematical platonism.
> Pysicists hold that the platonic view of the world is not borne out by
> physical experimentation. If all physical quantities are discrete, not
> continuous, then there are no real world irrational quantities - every
> real world quantity can be reduced to an integer number of the
> fundamental unit for that quantity (charge, length, time, etc.).

This discussion is veering off into amateur metaphysics, but that won't
stop me from responding with my own. :-)

"Physical quantities" includes all sorts of things that are not
quantized under  the most widely accepted theories of physics. Electric
and magnetic fields, for instance, have a well-developed quantum
theory, but the resulting fields are still continuous in general. [In
trendy-popularized-science-land, Wolfram, of course, believes that
there is some way an underlying discrete phenomenon could explain all
this, but I believe he is simply hand-waving, and no practical method
for describing reality can come of his ideas, much less an experimental
proof of such discreteness.]

Pi does not depend on a physical circle for its definition. It also
arises from such things as phases of wave functions. Saying pi doesn't
exist physically is sort of like saying -1 doesn't exist physically.
cos(pi), e^pi*i, and all that. Many things are hard to "point at" but
still exist in physical theories.

I remember reading long ago in one of Asimov's popularized science
essays about a person who objected to the reality of imaginary numbers.
The response was, "well, how about fractions? are those real?" "But, of
course!" "Well, then, why don't you show me what half a piece of chalk
is." The skeptic breaks a piece of chalk in two, and hands it back.
"Well, neither of those is 1/2 a piece of chalk; both are just
*smaller* pieces of chalk. If you aren't even clear on fractions, how
can I possibly explain imaginary numbers to you?"

Analyzing anything in the physical world while studiously restricting
one's calculations to be completely contained within the field of
rationals is pretty much impossible. You can't use trig functions,
exponentials, or logarithms. Calculus is completely out of bounds.

Even the mathematics of prime numbers, which seems like the most
discrete kind of mathematics, is intimately connected to the Riemann
zeta function, which has pi and continuity written all over it.

Of course, in some sense, all physical theories are just as imaginary
as pure mathematics, just mental constructs we humans amuse ourselves
with; theories do not determine what the world *is*, just how we
describe it. Nonetheless, quantum physics by no means allows us to
escape continuous mathematics.
From: Barry Jones
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <429648cf$0$6512$8f2e0ebb@news.shared-secrets.com>
··············@hotmail.com wrote:

>Raffael Cavallaro wrote:
>  
>
>>On 2005-05-26 14:15:47 -0400, ···@sevak.isi.edu (Thomas A. Russ) said:
>>
>>    
>>
>>>Well, I would say that there are a fair number of irrational numbers in
>>>the analog world.   Pi and the square root of 2 for starters.
>>>      
>>>
>>I believe you're missing the point. pi as defined relies on the
>>existence of *perfect* circles. No perfect circles in the real world
>>(and the quantum nature of the physical world means all "circles" are
>>actually slightly bumpy) means no pi.
>>
>>Similarly, square roots are a mathematical construct. Basically this is
>>an argument between quantum mechanics and mathematical platonism.
>>Pysicists hold that the platonic view of the world is not borne out by
>>physical experimentation. If all physical quantities are discrete, not
>>continuous, then there are no real world irrational quantities - every
>>real world quantity can be reduced to an integer number of the
>>fundamental unit for that quantity (charge, length, time, etc.).
>>    
>>
>
>This discussion is veering off into amateur metaphysics, but that won't
>stop me from responding with my own. :-)
>
>"Physical quantities" includes all sorts of things that are not
>quantized under  the most widely accepted theories of physics. Electric
>and magnetic fields, for instance, have a well-developed quantum
>theory, but the resulting fields are still continuous in general. [In
>trendy-popularized-science-land, Wolfram, of course, believes that
>there is some way an underlying discrete phenomenon could explain all
>this, but I believe he is simply hand-waving, and no practical method
>for describing reality can come of his ideas, much less an experimental
>proof of such discreteness.]
>
>Pi does not depend on a physical circle for its definition. It also
>arises from such things as phases of wave functions. Saying pi doesn't
>exist physically is sort of like saying -1 doesn't exist physically.
>cos(pi), e^pi*i, and all that. Many things are hard to "point at" but
>still exist in physical theories.
>
>I remember reading long ago in one of Asimov's popularized science
>essays about a person who objected to the reality of imaginary numbers.
>The response was, "well, how about fractions? are those real?" "But, of
>course!" "Well, then, why don't you show me what half a piece of chalk
>is." The skeptic breaks a piece of chalk in two, and hands it back.
>"Well, neither of those is 1/2 a piece of chalk; both are just
>*smaller* pieces of chalk. If you aren't even clear on fractions, how
>can I possibly explain imaginary numbers to you?"
>
>Analyzing anything in the physical world while studiously restricting
>one's calculations to be completely contained within the field of
>rationals is pretty much impossible. You can't use trig functions,
>exponentials, or logarithms. Calculus is completely out of bounds.
>
>Even the mathematics of prime numbers, which seems like the most
>discrete kind of mathematics, is intimately connected to the Riemann
>zeta function, which has pi and continuity written all over it.
>
>Of course, in some sense, all physical theories are just as imaginary
>as pure mathematics, just mental constructs we humans amuse ourselves
>with; theories do not determine what the world *is*, just how we
>describe it. Nonetheless, quantum physics by no means allows us to
>escape continuous mathematics.
>  
>
That's why I brought up the issue of di/dt, as in v = L di/dt. Calculus 
falls apart without continuity.

It's interesting that digital electronics deals with discontinuities as 
a matter of course, but when things start to fail, your attention seems 
to be drawn to those curved corners of the square wave signals. They are 
caused by analog issues like stray capacitance, and bandwidth, and put 
you right back in the analog world.

As long as we're opining about quantum physics, I wonder whether, if we 
look hard enough, we might find some "rounded corners" there as well. :^) 
The full story is not yet written. When an electron goes from one 
discrete energy state to another, is there no room within a quantum 
level for some small variation, implying an analog world beneath the 
quantum one? Maybe we're just not looking close enough. Even Heisenberg 
seems to be telling us, "you never know."

Sorry to have hijacked the floating point representation issue. Funny 
how that one word, "fantasy" set me off.

Barry
From: Raffael Cavallaro
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <2005052621325375249%raffaelcavallaro@pasdespamsilvousplaitdotmaccom>
On 2005-05-26 17:00:37 -0400, ···············@hotmail.com" 
<············@gmail.com> said:

> Of course, in some sense, all physical theories are just as imaginary
> as pure mathematics, just mental constructs we humans amuse ourselves
> with; theories do not determine what the world *is*, just how we
> describe it. Nonetheless, quantum physics by no means allows us to
> escape continuous mathematics.

Escape, no - no one was ever saying that quantum physics somehow 
invalidates continuous mathematics. The discussion was about whether 
such mathematical constructs have any real world equivalents. If every 
physical quantity is discrete then the constructs of continuous 
mathematics, such as irrational numbers, are purely ideal -  they have 
no counterpart in reality.

On the other hand, if certain physical quantities are truly continuous, 
then irrational numbers (to take the example that started all this) are 
not merely mental constructs, but have a concrete equivalent in the 
measurable physical world.

I think those of us playing devil's advocate here were merely pointing 
out that what appears at first glance to be continuous (e.g., an 
oscilloscope trace) may in fact be discrete at very small scale.
From: Jacek Generowicz
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <m2mzqgx2w7.fsf@genera.local>
Raffael Cavallaro <················@pas-d'espam-s'il-vous-plait-dot-mac.com> writes:

> If every physical quantity is discrete then the constructs of
> continuous mathematics, such as irrational numbers, are purely ideal
> - they have no counterpart in reality.

Consider that:

- Rationals are countably infinite; irrationals are uncountably
  infinite (i.e., there are loads more irrationals than rationals),

- Between any two distinct rationals there is an infinite number of
  other rationals and an infinite number of irrationals,

- Between any two distinct irrationals there is an infinite number of
  other irrationals and an infinite number of rationals.

Do you still stand by your statement ?
From: Raffael Cavallaro
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <2005053101163175249%raffaelcavallaro@pasdespamsilvousplaitdotmaccom>
On 2005-05-27 08:35:36 -0400, Jacek Generowicz <················@cern.ch> said:

> 
>> If every physical quantity is discrete then the constructs of
>> continuous mathematics, such as irrational numbers, are purely ideal
>> - they have no counterpart in reality.
> 
> Consider that:
> 
> - Rationals are countably infinite; irrationals are uncountably
>   infinite. (ie, there are loads more irrationals than rationals),
> 
> - Beteween any two distinct rationals there is an infinite number or
>   other rationals and and infinite number of irrationals,
> 
> - Between any two distinct irrationals there is an infinite number of
>   other irrationals and an infinite number of rationals.
> 
> Do you still stand by your statement ?

Yes. You confuse analogs with enumeration. To say that a completely 
discrete physical world has an analog of rational numbers is not to say 
that such a universe has an analog for *every* possible rational number.

The important point is that such a completely discrete universe would 
have no analog at all for *any* of the irrational numbers because by 
definition none of the irrationals can be represented by a ratio of 
integer quantities, and the whole universe would be composed solely of 
integer (discrete) quantities.

For example someone in this thread brought up the possible irrational 
number analog of a square with sides of length 1 meter whose diagonal 
would therefore be radical-2 meters in length. But if space-time were 
quantized the length of such a diagonal would not be radical-2 meters 
but some integer multiple of the fundamental unit of space-time 
distance which was really, really, really close to radical-2 meters.

In such a world, irrational numbers would be one of many concepts which 
some people have agreed they may discuss meaningfully in certain ways 
even though there is no physical world analogy for them.
From: Jacek Generowicz
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <m2oeaqcz2x.fsf@genera.local>
Raffael Cavallaro <················@pas-d'espam-s'il-vous-plait-dot-mac.com> writes:

> a square with sides of length 1 meter whose diagonal would therefore
> be radical-2 meters in length. But if space-time were quantized the
> length of such a diagonal would not be radical-2 meters but some
> integer multiple of the fundamental unit of space-time distance
> which was really, really, really close to radical-2 meters.

With a Euclidean metric, this is clearly false. The diagonal has
length (* (sqrt 2) N fundamental-unit) [where N is an integer]. Try it
with a piece of graph paper to represent your quantized space.

You could contrive some metric that avoids the irrationals, but I
doubt that it would bear any meaningful relation to the real world.
From: Pascal Bourguignon
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <87r7fm9tit.fsf@thalassa.informatimago.com>
Jacek Generowicz <················@cern.ch> writes:

> Raffael Cavallaro <················@pas-d'espam-s'il-vous-plait-dot-mac.com> writes:
>
>> a square with sides of length 1 meter whose diagonal would therefore
>> be radical-2 meters in length. But if space-time were quantized the
>> length of such a diagonal would not be radical-2 meters but some
>> integer multiple of the fundamental unit of space-time distance
>> which was really, really, really close to radical-2 meters.
>
> With a Euclidean metric, this is clearly false. The diagonal has
> length (* (sqrt 2) N fundamental-unit) [where N is an integer]. Try it
> with a piece of graph paper to represent your quantized space.

In a quantized space, all distances are integers. 
There is no real (sqrt 2):

-------- ----------------------------------------------------------------
  side                               diagonal
-------- ----------------------------------------------------------------
  1.0L+0                                                                1
  1.0L+3                                                             1414
  1.0L+6                                                          1414214
  1.0L+9                                                       1414213562
 1.0L+12                                                    1414213562373
 1.0L+15                                                 1414213562373095
 1.0L+18                                              1414213562373095049
 1.0L+21                                           1414213562373095048704
 1.0L+24                                        1414213562373095048740864
 1.0L+27                                     1414213562373095048702066688
 1.0L+30                                  1414213562373095048768638681088
 1.0L+33                               1414213562373095048785131355504640
 1.0L+36                            1414213562373095048771620556622528512
 1.0L+39                         1414213562373095048802749437246913380352
 1.0L+42                      1414213562373095048834625411006283485544448
 1.0L+45                   1414213562373095048747582751994030184965603328
 1.0L+48                1414213562373095048793386533447589255136870924288
 1.0L+51             1414213562373095048808598340650328007954831362752512
 1.0L+54          1414213562373095048846242492874705508261677461139357696
 1.0L+57       1414213562373095048785762619066491836044554239896602017792
 1.0L+60    1414213562373095048781679230663440574482993744607420799254528
-------- ----------------------------------------------------------------


> You could contrive some metric that avoids the irrationals, but I
> doubt that it would bear any meaningful relation to the real world.

Note that (sqrt 2) is just a notation for the asymptotic limit of a
series.  In a discrete universe, you would not need to theorize about
limits at infinity, and it would not even matter if the series did not
converge, as long as it went (and stayed) down to the needed
precision.  Then we could just take the value S_omega of the series
(which would be a rational), and perhaps S_omega+1 /= S_omega, but it
would not matter since the computed diagonal would be the same (to the
unit).
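
A minimal sketch (not the code behind the table above; the helper name
is assumed) of how such integer diagonals can be computed exactly in
Lisp.  Using ISQRT on the exact integer 2*side^2 stays exact for
arbitrarily large sides, unlike rounding a long-float multiple of
(sqrt 2); note it gives the floor, so a few entries differ by one from
the rounded values in the table.

(defun integer-diagonal (side)
  ;; Largest whole number of fundamental units not exceeding the
  ;; Euclidean diagonal of a square with integer SIDE.
  (isqrt (* 2 side side)))

(loop for k from 0 to 60 by 3
      do (format t "~&10^~2D  ~D" k (integer-diagonal (expt 10 k))))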

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
The mighty hunter
Returns with gifts of plump birds,
Your foot just squashed one.
From: Jacek Generowicz
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <m2ekbmcf9r.fsf@genera.local>
Pascal Bourguignon <···@informatimago.com> writes:

>   side                               diagonal

[...]

>  1.0L+60    1414213562373095048781679230663440574482993744607420799254528

? (let ((it 1414213562373095048781679230663440574482993744607420799254528))
	  (* it it))
1999999999999999999943404605701331440810823069584306308214678576155020432604012318870704085237937802712673603320528502784

Close, but no cigar.

> (sqrt 2) is just a notation for the asymptotic limit of a
> serie.

(sqrt 2) is just a notation for the number X which satisfies

(= 2 (* X X))

I am sorry, gentlemen, this is getting way too silly for me: there are
too many "cheeses flying around people's heads" (to quote Andre out of
context). 
From: William Bland
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <pan.2005.05.26.20.08.40.689333@abstractnonsense.com>
On Thu, 26 May 2005 16:00:27 -0400, Raffael Cavallaro wrote:

> Pysicists hold that the platonic view of the world is not borne out by 
> physical experimentation.

I'm not aware of any experiments that have shown quantisation of
space-time - do you know of any?  The most often proposed scale of
quantisation is the Planck length, and I'm not aware of any experiments
that operate on that scale so far.

Best wishes,
		Bill.
From: André Thieme
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <d75mnn$mtg$1@ulric.tng.de>
William Bland schrieb:
> On Thu, 26 May 2005 16:00:27 -0400, Raffael Cavallaro wrote:
> 
> 
>>Pysicists hold that the platonic view of the world is not borne out by 
>>physical experimentation.
> 
> 
> I'm not aware of any experiments that have shown quantisation of
> space-time - do you know of any?  The most often proposed scale of
> quantisation is the Planck length, and I'm not aware of any experiments
> that operate on that scale so far.

I am definitely no expert on that issue. It might not always be a good
source, but Wikipedia says:
"Loop quantum gravity, string theory, and black hole thermodynamics all
predict a quantized spacetime with agreement on the order of magnitude.
Loop quantum gravity even makes precise predictions about the geometry
of spacetime at the Planck scale."
( http://en.wikipedia.org/wiki/Spacetime )


André
-- 
From: Mikko Heikelä
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <Pine.OSF.4.61.0505262324520.365863@kosh.hut.fi>
On Thu, 26 May 2005, Raffael Cavallaro wrote:

> On 2005-05-26 14:15:47 -0400, ···@sevak.isi.edu (Thomas A. Russ) said:
>
> Similarly, square roots are a mathematical construct. Basically this 
> is an argument between quantum mechanics and mathematical platonism. 
> Pysicists hold that the platonic view of the world is not borne out 
> by physical experimentation. If all physical quantities are 
> discrete, not continuous, then there are no real world irrational 
> quantities - every real world quantity can be reduced to an integer 
> number of the fundamental unit for that quantity (charge, length, 
> time, etc.).

Could you (or someone else expressing similar views in this 
discussion) explain how we know that length and time are quantized? 
Or, if this is an argument by authority, point us to the authority 
i.e. an article with the aforementioned explanation...

-Mikko
From: ··············@hotmail.com
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <1117146549.794599.154080@f14g2000cwb.googlegroups.com>
Mikko Heikelä wrote:
> On Thu, 26 May 2005, Raffael Cavallaro wrote:
>
> > On 2005-05-26 14:15:47 -0400, ···@sevak.isi.edu (Thomas A. Russ) said:
> >
> > Similarly, square roots are a mathematical construct. Basically this
> > is an argument between quantum mechanics and mathematical platonism.
> > Pysicists hold that the platonic view of the world is not borne out
> > by physical experimentation. If all physical quantities are
> > discrete, not continuous, then there are no real world irrational
> > quantities - every real world quantity can be reduced to an integer
> > number of the fundamental unit for that quantity (charge, length,
> > time, etc.).
>
> Could you (or someone else expressing similar views in this
> discussion) explain how do we know that length and time are quantized?
> Or, if this is an argument by authority, point us to the authority
> i.e. an article with the aforementioned explanation...
>
> -Mikko

This is by no means part of experimental physics. However, the general
way this kind of idea arises is in discussion of quantum gravity.
(setting aside electric charge, which seems to come in discrete
amounts, with no experimentally verified explanation, and only
speculations about magnetic monopoles, or strange topological
arguments, that have led pretty much nowhere.)

Now, I will preface this by saying it is not necessary to have a full
quantum theory of gravity to understand how ordinary Earth-strength
gravitational fields affect a laboratory experiment which explores
quantum mechanics. You just add a term in your Hamiltonian representing
the gravitational potential and move on, if you like. However, taken to
extremes (such as in the neighborhood of black holes, or when the
universe was born out of something *much* more compact), the
(continuous) geometry of Einstein's theory of gravity seriously mucks
with the variables of space and time which are used in quantum field
theories. So there is lots of fun to be had by people like Hawking and
Penrose and string theorists, and so forth, trying to replace this
oil-and-water combination with something new.

In any case, the key physical "landmarks" which indicate you are
approaching this extreme are given by taking the physical constants
which determine relativity (the speed of light, c, and the
gravitational constant G), and combining them with the physical
constant which governs quantum mechanics: Planck's h.

You can combine these constants alone to calculate a length:

http://scienceworld.wolfram.com/physics/PlanckLength.html
sqrt(G h/c^3) = 4 x 10^-35 meter.

or a time

http://scienceworld.wolfram.com/physics/PlanckTime.html
sqrt(G h / c^5) = 1.35 x 10^-43 seconds,

or a mass
http://scienceworld.wolfram.com/physics/PlanckMass.html

sqrt(h c/G) = 5.45 x 10^-8 kg.

Planck was actually quite proud that his constant led to these new
fundamental constants.
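
As a quick sanity check on those numbers, here is a small double-float
sketch in Lisp, using rounded textbook values for G, h and c (the
numeric constants are my own additions, not from the post):

(let ((g 6.674d-11)   ; gravitational constant, m^3 kg^-1 s^-2
      (h 6.626d-34)   ; Planck's constant, J s
      (c 2.998d8))    ; speed of light, m/s
  (format t "Planck length ~,2E m~%"  (sqrt (/ (* g h) (expt c 3))))
  (format t "Planck time   ~,2E s~%"  (sqrt (/ (* g h) (expt c 5))))
  (format t "Planck mass   ~,2E kg~%" (sqrt (/ (* h c) g))))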

Now, these are not absolute limits. The Planck mass, especially, is a
perfectly "ordinary" mass, about 10^19 amu, or 10^18 carbon atoms.
Bacteria are much smaller than this, about 10^-15 kg, but don't even
require quantum mechanics to describe their motion.

Instead, we need some context in order to make sense of when they are
extreme. A fundamental subatomic particle of the Planck mass would be
far beyond our experimental experience. The most massive subatomic
particles we have direct evidence of from particle colliders are about
the mass of atoms.

For the Planck mass, we are actually concerned about the opposite
direction: black holes are the main hallmark of gravity; if a black
hole were as *small* as the Planck mass, it would definitely need
quantum mechanics to explain it, while a black hole as massive as a
star works pretty much classically. Such a Planck-mass black hole would
also have a size roughly the Planck length, if it were still classical.

The Planck length *is* extreme, because it is much smaller than we can
measure: much, much smaller than an atomic nucleus or even a proton
(10^-15 meter). We think an electron is point-like, but only because
(crudely) throwing electrons together as hard as we can, the closest we
can make them approach one another is roughly 10^-18 meters.

This is really what makes the most sense to think about: it might be
that "space" and "time" appear smooth and continuous only because we
are used to probing regions much larger than the Planck length, on time
scales much longer than the Planck time.

Crudely speaking, space and time could actually "look like" some kind
of foam or wooly stuff, with bubbles in it at the Planck length, or
like a chain-link fence with links the size of the Planck length, or
whatever. It could even look like graph paper, where God decided to
doodle on the edges of the squares, or a rasterized kind of cellular
automata, to get even more cartoony.

To "giant" things like protons & neutrons, and ordinary atoms, and us,
we would ride over all this chaos, like an ocean liner riding over the
ocean, without worrying about the bubbly foam on the ocean surface, or
even the fact that the water is made of tiny little molecules. We just
see that space time is curved enough overall in our neighborhood that
when we drop things, they fall down.

But all this is really the most extreme sort of speculation. To claim
on this basis that space & time are "discrete" on this scale is quite
glib.
From: GP lisper
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <1117147502.3e8a469adcb7314c2712e9f76f6e0e85@teranews>
On 26 May 2005 15:29:09 -0700, <············@gmail.com> wrote:
>
> This is by no means part of experimental physics. However, the general
> way this kind of idea arises is in discussion of quantum gravity.
> (setting aside electric charge, which seems to come in discrete
> amounts, with no experimentally verified explanation, and only
> speculations about magnetic monopoles, or strange topological
> arguments, that have led pretty much nowhere.)

This is pretty funny.

There is little question about the discreteness of charge, assuming
monopoles provides solutions to several puzzling problems (and makes
some patch antenna design simple).  Basically the 'observations' in
this area are relatively noise-free....yet people do not think
monopoles exist.


> Now, I will preface this by saying it is not necessary to have a full
> quantum theory of gravity to understand how ordinary Earth-strength
> gravitational fields affect a laboratory experiment which explores
> quantum mechanics. You just add a term in your Hamiltonian representing
> the gravitational potential and move on, if you like. However, taken to
> extremes (such as in the neighborhood of black holes, or when the
> universe was born out of something *much* more compact), the

Here, on the other hand, the observations are few and noisy.  There
are probably more anomalous results (and objects) than understood
objects.  Yet many believe in black holes, and in the biggest black hole
of all time.  It must be the cheering section.

The biggest laugh (for me) comes with evaporating black holes, shades
of Bondi, Gold and Hoyle!


> You can combine these constants alone to calculate a length:

This is numerology reborn.


> But all this is really the most extreme sort of speculation. To claim
> on this basis that space & time are "discrete" on this scale is quite
> glib.

Oh, Hi "kenny"


-- 
With sufficient thrust, pigs fly fine.
From: Jacek Generowicz
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <m2r7fsx412.fsf@genera.local>
Raffael Cavallaro <················@pas-d'espam-s'il-vous-plait-dot-mac.com> writes:

> If all physical quantities are discrete, not continuous, then there
> are no real world irrational quantities

Eh? How do you figure that?

If there were any logic to it, it would be an argument against the
existence of real-world RATIONAL numbers (given that the rationals are
countable, while the irrationals are not countable).

But what does discreteness have to do with (ir)rationality anyway?

Here's a discrete set of rational numbers:

   {1, 2, 3, ...}

Here's a discrete set of irrational numbers:

   {pi, 2pi, 3pi, ...}
From: Raffael Cavallaro
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <2005053100572316807%raffaelcavallaro@pasdespamsilvousplaitdotmaccom>
On 2005-05-27 08:11:05 -0400, Jacek Generowicz <················@cern.ch> said:

> Eh? How do you figure that?
> 
> If there were any logic to it, it would be an arument against the
> existence of real-world RATIONAL numbers (given that the rationals are
> countable, while the irrationals are not countable).

This is pretty simple so I'm not sure what you're stuck on. If the 
physical world is truly discrete then any real world object consists of 
a finite (though very large) number of individual particles. Thus any 
segment of that object constitutes a portion representable by the 
rational number (number of particles in portion/ number of particles in 
whole). Thus, a real-world  rational number.

Again please be aware that no one is claiming that any of this 
invalidates continuous mathematics or the concept of irrational numbers. 
All we're saying is that if the world really is completely discrete 
then irrational numbers have no real-world analog - they are concepts 
*only*. This does not negate the usefulness of the concept - after all 
imaginary numbers are useful and they are named in recognition of the 
fact that they are purely conceptual.
From: Jacek Generowicz
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <m2k6lecyqx.fsf@genera.local>
Raffael Cavallaro <················@pas-d'espam-s'il-vous-plait-dot-mac.com> writes:

> This is pretty simple so I'm not sure what you're stuck on.

I'm stuck on it being wrong :-)

> If the physical world is truly discrete then any real world object
> consists of a finite (though very large) number of individual
> particles. Thus any segment of that object constitutes a portion
> representable by the rational number (number of particles in
> portion/ number of particles in whole). Thus, a real-world rational
> number.

Yes, the number of particles is rational. This in no way precludes the
existence of irrational physical quantities. The "size" of these
particles could well be irrational.
From: Espen Vestre
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <kwpsv6trqm.fsf@merced.netfonds.no>
Jacek Generowicz <················@cern.ch> writes:

> Yes, the number of particles in rational. This in no way precludes the
> existencte of irrational physical quantities. The "size" of these
> particles could well be irrational.

Hmm. In what sense could the "size" be irrational? 
-- 
  (espen)
From: André Thieme
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <d7k8km$ev3$1@ulric.tng.de>
Jacek Generowicz schrieb:
> Raffael Cavallaro <················@pas-d'espam-s'il-vous-plait-dot-mac.com> writes:
> 
> 
>>This is pretty simple so I'm not sure what you're stuck on.
> 
> 
> I'm stuck on it being wrong :-)
> 
> 
>>If the physical world is truly discrete then any real world object
>>consists of a finite (though very large) number of individual
>>particles. Thus any segment of that object constitutes a portion
>>representable by the rational number (number of particles in
>>portion/ number of particles in whole). Thus, a real-world rational
>>number.
> 
> 
> Yes, the number of particles in rational. This in no way precludes the
> existencte of irrational physical quantities. The "size" of these
> particles could well be irrational.

It /might/ be irrational, but we can't ever find out about it. We can
only measure with limited exactness and therefore see everything as
rational. When we can't differentiate between irrational and rational,
it means we can have a rational theory of the universe which is no
worse than an irrational theory (funny wording, btw).
It does not make sense to make theories about something that we could
never find out.
Perhaps invisible cheese is flying around the head of every person. It
does not interact with matter or anything else in the universe, but it
is there. But hey, when it does not interact with us (not even like
neutrinos sometimes do) we can also say it is not there, because it
won't make a difference. I think the theory that a secret invisible
cheese is flying around our heads is equivalent to a theory in which
there are irrational things in our universe.


André
-- 
From: Geoffrey Summerhayes
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <Z1mne.9127$yG4.726935@news20.bellglobal.com>
"Andr� Thieme" <······························@justmail.de> wrote in message 
·················@ulric.tng.de...
> Perhaps invisible cheese is flying around the head of every person. It
> does not interact with matter or anything else in the universe, but it
> is there. But hey, when it does not interact with us (not even like
> neutrinos sometimes do) we can also say they are not there, cause it
> won't make a difference. I think the theory that a secret invisible
> cheese is flying around our heads is equivalent to a theory in which
> there are irrational things in our universe.
>

Not far off from the truth. The universe consists of strands of limburger
cheese vibrating in 11 dimensions. Some of them curled up due to the smell.

That's right, the universe smells really, really bad.

This is known as the 'String cheese' theory. :-)

--
Geoff
From: Raffael Cavallaro
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <2005060122343143658%raffaelcavallaro@pasdespamsilvousplaitdotmaccom>
On 2005-06-01 13:14:27 -0400, "Geoffrey Summerhayes" 
<·············@hotmail.com> said:
> The universe consists of strands of limburger
> cheese vibrating in 11 dimensions. Some of them curled up due to the smell...
> This is known as the 'String cheese' theory. :-)

That's the 'Stinky cheese' theory. In the 'String cheese' theory 
they're made from mozzarella. ;^)
From: André Thieme
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <d75mde$mpg$1@ulric.tng.de>
Thomas A. Russ schrieb:
> André Thieme wrote:
> 
> 
>>Hello Barry!
>>
>>Okay, it might be true what you say. Perhaps in the analog part of our
>>world there exist something like irrational numbers. I can't disprove
>>that. However, Barry, I don't buy it!
> 
> 
> Well, I would say that there are a fair number of irrational numbers in
> the analog world.   Pi and the square root of 2 for starters.

As I said, there is (at least in my belief) no analog world. "Analog" is
something we invented to describe "something that gets very small or 
exact". We will find things whose first 20 decimal places are exactly
the ones of an irrational number. We would need better measuring methods
to see that it is *not* an irrational number. Anyway, it doesn't matter
because for any irrational number you tell me, there is a rational
number with which we can represent this irrational one in our real world.

You have never seen Pi, i.e. circles.


André
-- 
From: ··············@hotmail.com
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <1117153321.943903.213140@g14g2000cwa.googlegroups.com>
André Thieme wrote:
>
> As I said, there is (at least in my belief) no analog world. "Analog" is
> something we invented to describe "something that gets very small or
> exact". We will find things whose first 20 decimal places are exactly
> the ones of an irrational number. We would need better measuring methods
> to see that it is *not* an irrational number. Anyway, it doesn't matter
> because for any irrational number you tell me, there is a rational
> number with which we can represent in our real world this irrational one.
>
> You have never seen Pi, i.e. circles.
>

This is a meaningless statement. Have you ever "seen" a rational
number? A negative number? Have you ever "seen" a cons cell? (to bring
things back into comp.lang.lisp territory). Have you ever "seen" an
algorithm?

We can "see" *material things* (when the lights are on) but also
*depictions* of immaterial ones. Some things cannot be seen directly,
but can be thought about (and rigorously calculated with) nonetheless.


The world might be digital, but we can still do calculus, so apparently
we can make an analog world even if one isn't provided for our
convenience. Giving up calculus because it isn't "real" is almost the
opposite of what should be done: better to give up reality, instead.
:-)

If you need a good irrational number, start up Maxima and use %e or
%pi.
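
For what it's worth, standard Common Lisp can already hand back exact
rationals derived from its float PI constant; a small illustration (the
exact results depend on the implementation's float format for PI):

(rational pi)               ; a rational exactly equal to the float PI
(rationalize pi)            ; the simplest rational within PI's precision
(float (rationalize pi) pi) ; converting back reproduces PI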
From: André Thieme
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <d75r16$qmf$1@ulric.tng.de>
··············@hotmail.com schrieb:
> 
> André Thieme wrote:
> 
>>As I said, there is (at least in my belief) no analog world. "Analog" is
>>something we invented to describe "something that gets very small or
>>exact". We will find things whose first 20 decimal places are exactly
>>the ones of an irrational number. We would need better measuring methods
>>to see that it is *not* an irrational number. Anyway, it doesn't matter
>>because for any irrational number you tell me, there is a rational
>>number with which we can represent in our real world this irrational one.
>>
>>You have never seen Pi, i.e. circles.
>>
> 
> 
> This is a meaningless statement. Have you ever "seen" a rational
> number? A negative number? Have you ever "seen" a cons cell? (to bring
> things back into comp.lang.lisp territory). Have you ever "seen" an
> algorithm?
> 
> We can "see" *material things* (when the lights are on) but also
> *depictions* of immaterial ones. Some things cannot be seen directly,
> but can be thought about (and rigourously calculated with) nonetheless.
> 
> 
> The world might be digital, but we can still do calculus, so apparently
> we can make an analog world even if one isn't provided for our
> convenience. Giving up calculus because it isn't "real" is almost the
> opposite of what should be done: better to give up reality, instead.
> :-)
> 
> If you need a good irrational number, start up Maxima and use %e or
> %pi.


Oohoo! Stop! I don't want to get rid of calculus! It is fine that we
have it. Although everything in our practical world can be (and is)
done with rational numbers, it is nice to have calculus. Having
irrational numbers makes things much easier in calculus.
I have no problem with mathematical constructs that only exist in math
and have nothing to do with our real world. Just doing science for its
own sake can be stimulating for many people.

And my "you have never seen Pi" was meant in a practical way. Coming
back to the mathematical view, we can represent Pi. Anyway, even in
the mathematical world there are irrational numbers that we can't
represent (concretely). We can only say "let x be an irrational
number". This then stands for any irrational number, but unlike Pi or e
we can't express them with an algorithm, as this algorithm would have to
be as long as the row of decimal places - but in that case the best
algorithm would be just enumerating all digits of the number.


André
-- 
From: ··············@hotmail.com
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <1117169700.301243.13850@g14g2000cwa.googlegroups.com>
André Thieme wrote:
> ··············@hotmail.com schrieb:
> >
> > André Thieme wrote:
> >>You have never seen Pi, i.e. circles.
> >
> > This is a meaningless statement. Have you ever "seen" a rational
> > number? A negative number? Have you ever "seen" a cons cell? (to bring
> > things back into comp.lang.lisp territory). Have you ever "seen" an
> > algorithm?
> >
>
> ... Although everything in our practical world can (and gets) be
> done with rational numbers
>
> And my "you have never seen Pi" was meant in a practicle way. While
> coming back to the mathematical view we can represent Pi. Anyway, even
> when talking in the mathematical world there are irrational numbers that
> we can't represent (concretely). We can only say "let x be an irrational
> number". This then stands for any irrational number, but unlike Pi or e
> we can't express them with an algorithm, as this algorithm would have to
> be as long as the row of decimal places

Remember, you are posting in a Lisp group...we've got other things
which are concrete but not numbers. '(sqrt 2) is a perfectly finite way
to express an irrational.  '(cos 1) is another way to express an
irrational number.

These are all just as "concrete" as any other Lisp expression: they're
made up of cons cells, symbols, and exact numeric atoms. Just don't
pass them to #'eval if you want to keep them exact.

Another potentially clever way is a continued fraction object with a
circular-tailed "infinite" list [quadratic irrationals (/ (+ a (sqrt
b)) c) can be expressed with a periodic continued fraction expansion.
An interesting aside you learn from continued fractions is that the
golden ratio "phi" is, in some sense, the *least* rational number. It's
continued fraction expansion is all ones: 1 + 1/(1 + (1/(1 + ...)));
meaning it "avoids" getting very well approximated by rationals. pi, on
the other hand, has a few larger integers in its continued fraction
expansion (which, however, is not periodic and nice). These large
integers mean that the corresponding truncated "convergent" of the
expansion is *very* close to pi. The classic 355/113 approximation is
an example.  The true value of pi is approximately

? (* pi 113)
354.9999698556466D0, where you might naively have expected something as
bad as 354.5]
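
A minimal sketch of that continued-fraction idea in plain Lisp (my own
helper names, not from the post): expand a rational approximation of pi
into its leading terms and rebuild a convergent from them. The
unusually large term 292 is what makes the truncation 355/113 so close.

(defun cf-terms (x n)
  ;; First N continued-fraction terms of the rational X.
  (loop repeat n
        for r = x then (/ 1 (- r a))
        for a = (floor r)
        collect a
        until (= r a)))

(defun cf-value (terms)
  ;; Rebuild the exact rational value of a finite continued fraction.
  (reduce (lambda (term acc) (+ term (/ 1 acc)))
          terms :from-end t))

;; (cf-terms (rationalize pi) 5)  => (3 7 15 1 292)
;; (cf-value '(3 7 15 1))         => 355/113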

Given supporting algorithms, such as in a computer algebra system, we
can do honest-to-god, real money-making practical work with
representations like these, with finite memory in finite time. And it
is as "concrete" as any other representation on a computer: bits in
memory.

Don't miss the big picture; the representations we choose determine
what we can do with our computers. Even physical reality being possibly
discrete on the Planck scale doesn't change what we can do. We don't
have to use physically infinite or continuous representations to
represent infinite and continuous "stuff". 'pi is just as "concrete"
and potentially practical as -1.

I'm surprised no one has chimed in yet with something like:

For god's sake, men, we're LISP PROGRAMMERS. You're all whining like a
bunch of Fortran IV weenies, who think that God gave us only INTEGER
and DOUBLE PRECISION. Pull yourselves together! :-)
From: Jacek Generowicz
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <m2is14x2ra.fsf@genera.local>
André Thieme <······························@justmail.de> writes:

> Anyway, it doesn't matter because for any irrational number you tell
> me, there is a rational number with which we can represent in our
> real world this irrational one.

... and there are infinitely many more irrational numbers which are a
better approximation to the original than the rational number you
chose.
From: André Thieme
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <d79oki$71i$1@ulric.tng.de>
Jacek Generowicz schrieb:
> André Thieme <······························@justmail.de> writes:
> 
> 
>>Anyway, it doesn't matter because for any irrational number you tell
>>me, there is a rational number with which we can represent in our
>>real world this irrational one.
> 
> 
> .... and there are infinitely more irrational numbers which are a
> better approximation to the original, that the rational number you
> chose.

Yes, in the mathematical world you are right. But I was talking about 
the real world where no irrational numbers exist, so you can't use them 
for an approximation.


André
-- 
From: Jacek Generowicz
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <m2acmfw45k.fsf@genera.local>
André Thieme <······························@justmail.de> writes:

> I was talking about the real world where no irrational numbers
> exist,

You keep making this assertion, and I keep trying to understand what
leads you to believe it. All your arguments seem to be based
circularly on the assertion itself. Never mind. Let's go back to the
scheduled programme: Lisp.
From: Jacek Generowicz
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <m2fyw8x2lh.fsf@genera.local>
André Thieme <······························@justmail.de> writes:

> Anyway, it doesn't matter because for any irrational number you tell
> me, there is a rational number with which we can represent in our
> real world this irrational one.

... and there are infinitely many more irrational numbers which are a
better approximation to the original than the rational number you
chose.
From: Alain Picard
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <87r7ftugch.fsf@memetrics.com>
André Thieme <······························@justmail.de> writes:

> You have never seen Pi, i.e. circles.

I "see" Pi everytime I turn on a light switch,
and think about Maxwell's equations.


p.s. Yes, this is a metaphysical discussion.  You're basically
asking the members of the audience if they're platonists or realists.
From: Jacek Generowicz
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <m2y8a0x59k.fsf@genera.local>
André Thieme <······························@justmail.de> writes:

> Perhaps in the analog part of our world there exist something like
> irrational numbers. I can't disprove that. However, Barry, I don't
> buy it!  I believe in physics - and quantum physics explains us,
> that everything is quantized

And what makes you think that everything is quantized on rational
values ?
From: André Thieme
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <d79p42$7pa$1@ulric.tng.de>
Jacek Generowicz schrieb:
> André Thieme <······························@justmail.de> writes:
> 
> 
>>Perhaps in the analog part of our world there exist something like
>>irrational numbers. I can't disprove that. However, Barry, I don't
>>buy it!  I believe in physics - and quantum physics explains us,
>>that everything is quantized
> 
> 
> And what makes you think that everything is quantized on rational
> values ?

Because it seems we have limits in measuring things. It might be 
possible that in the "real universe" the Planck length is an irrational 
number. But we can't measure at that precision. In every case using a 
rational number will give correct results.
Our technical instruments that measure things only extend the 
abilities of our minds to measure things. It is impossible for humans to 
differentiate between a billionth of a second and a millionth of a 
second. If I ask you to wait that long, you can't do it. For you, time 
is quantized at whatever - say, 1/100th of a second.
If you don't have devices to measure the universe infinitely exactly, 
you won't find irrational numbers.


André
-- 
From: ··············@hotmail.com
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <1117135004.012404.155950@g47g2000cwa.googlegroups.com>
André Thieme wrote:
>
> Dave Seamn explained it...
> The problem with our "representation of them" is: we don't have it.
> It is impossible to represent them in any way. There is no formular,
> no trick no something that we could use to represent most irrational
> numbers. We only know very few of them, like Pi, (sqrt 2) or some that
> I can describe in other ways like:
> 12.121122111222111122221111122222 etc..
> For these we can give algorithms (the compression) to reproduce many
> decimal places. We only know they exist (in a mathematical world),
> but can't represent them.
>
>
> Another reason for calling them fantasy, and this is true for all
> irrational numbers that they are only a thought in a mathematical world.
> At least I don't believe there is anything in this universe whose
> measures have to do with an irrational number. Whenever I look at a
> circle I only think it is one. In fact it is an object that looks like
> one wihthout beeing one.

This sounds like a psychological argument, which could be quite
variable from one individual to another. Where do you draw the line
between "representations" that are acceptably "real" and
representations that only qualify as "fantasy"?

It is trivial to represent any number of irrationals, and useful ones,
too, as the roots of transcendental or algebraic equations which appear
routinely in finance, engineering, applied math, and science.
Catenaries and compound curves all involve irrational numbers. The
approximation of continuous compounding used by accountants everyday
involves and produces irrational numbers.  Yes, their calculators and
my Lisp implementation use rational approximations, but the formula e^x
hardly seems "fantasy" to me. It is a small step to the kind of
definitions you find in real analysis, with "least upper bounds" and so
forth.
(Incidentally, I don't subscribe to any completely discrete theory of
physical reality; I maintain that physical magnitudes of things like
velocities and electric fields are truly continuous. Lattice-type
theories or cellular-automata theories have yet to demonstrate that good
old Maxwell's equations and QED are obsolete.)

Then, at the other extreme, one can think the integers to be an
abstruse manifestation of abstract set theory. The number 10^1024 is a
totally imaginary number in our relatively puny physical universe. You
could never pile up or count this many things in reality, but my Lisp
implementation supports this fiction without a hiccup. My psychological
limit for understanding numbers in "reality" is probably in the
thousands or millions somewhere. As in the number of people I can see
in a stadium or number of small screws in an industrial-sized lot on a
loading dock. Beyond that, I am in the "fantasy world" of
pencil-pushing and digits that my computer or calculator spits out at
me. Even negative integers are a bit "fantastic."

The fact remains: no computer language today is capable of supporting
the full range of mathematical abstraction that can be used to solve
even practical problems, not to mention those of pure mathematics.
That's why they still give out Ph.D.'s in technical fields, instead of
just ordering an equivalent piece of software. Yet, most people find
that C, Fortran, Matlab, or Common Lisp provide the basic functionality
needed to write software that is useful in these fields. Mathematica or
"big floats" go only slightly further, but don't seem to be dominating
any practical field for that reason alone.

That's the test: whether you have what it takes to solve tractable
problems.

The philosophical questions that lie in the hidden assumptions of
mathematicians (are irrational numbers real? is set theory real?) are
beyond the reach of computing.
From: André Thieme
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <d75nfq$nkb$1@ulric.tng.de>
··············@hotmail.com schrieb:
> 
> André Thieme wrote:
> 
>>Dave Seamn explained it...
>>The problem with our "representation of them" is: we don't have it.
>>It is impossible to represent them in any way. There is no formular,
>>no trick no something that we could use to represent most irrational
>>numbers. We only know very few of them, like Pi, (sqrt 2) or some that
>>I can describe in other ways like:
>>12.121122111222111122221111122222 etc..
>>For these we can give algorithms (the compression) to reproduce many
>>decimal places. We only know they exist (in a mathematical world),
>>but can't represent them.
>>
>>
>>Another reason for calling them fantasy, and this is true for all
>>irrational numbers that they are only a thought in a mathematical world.
>>At least I don't believe there is anything in this universe whose
>>measures have to do with an irrational number. Whenever I look at a
>>circle I only think it is one. In fact it is an object that looks like
>>one wihthout beeing one.
> 
> 
> This sounds like a psychological argument, which could be quite
> variable from one individual to another. Where do you draw the line
> between "representations" that are acceptably "real" and
> representations that only qualify as "fantasy"?

In our real world it seems that memory is limited, as there are "only"
ca. 10^78 atoms in the universe. That means there exists no bijection
between practically representable numbers and the natural numbers.

But this was not what I was thinking about. The real problem with most
irrational numbers is that we don't know them and can't ever know them.
Representable irrational numbers are numbers whose decimal places could
be calculated by an algorithm. These are for example Pi, e, (sqrt 2) or
16.121122111222111122221111122222111111222222..
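
As a small illustration of what "calculated by an algorithm" means for
that last example (my own sketch, with an assumed helper name), here is
one way to generate its decimal digits - k ones followed by k twos, for
k = 1, 2, 3, ... - showing the expansion is computable even though it
never terminates or repeats:

(defun pattern-digits (n)
  ;; First N digits after the decimal point of the example number above.
  (loop with digits = '()
        for k from 1
        while (< (length digits) n)
        do (setf digits (append digits
                                (make-list k :initial-element 1)
                                (make-list k :initial-element 2)))
        finally (return (subseq digits 0 n))))

;; (pattern-digits 12) => (1 2 1 1 2 2 1 1 1 2 2 2)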


> It is trivial to represent any number of irrationals, and useful ones,
> too, as the roots of transcendental or algebraic equations which appear
> routinely in finance, engineering, applied math, and science.

You can "only" give me countable infinite irrational numbers, but there
exist uncountable infinite irrational numbers.


> Then, at the other extreme, one can think the integers to be an
> abstruse manifestation of abstract set theory. The number 10^1024 is a
> totally imaginary number in our relatively puny physical universe.  You
> could never pile up or count this many things in reality, but my Lisp
> implementation supports this fiction without a hiccup.

Yes, these are representable numbers. You just gave a representation of
this big number, 10^1024. Your problem is that for nearly all irrational
numbers there does not exist a formula. We only know that these numbers
exist in the mathematical world.


Perhaps you see now that I was not talking about any psychological or
philosophical problem here - I just meant that we can only represent
"a few" irrational numbers (countable infinite).


André
-- 
From: GP lisper
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <1117144803.db4347ff3193367454e8aeb52f01823d@teranews>
On Thu, 26 May 2005 01:26:33 +0200, <······························@justmail.de> wrote:
>
> Another reason for calling them fantasy, and this is true for all
> irrational numbers that they are only a thought in a mathematical world.
> At least I don't believe there is anything in this universe whose
> measures have to do with an irrational number. Whenever I look at a
> circle I only think it is one. In fact it is an object that looks like
> one wihthout beeing one.

There are many real objects with "dimensions" based on Pi.  I see
plenty of perfect circles; what I don't see is the ratio between the
radius and the circumference.

It's an old story, take a pencil, break it in half and hand it to me.
I see a single object, not half of an object.  Show me zero.


-- 
With sufficient thrust, pigs fly fine.
From: André Thieme
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <d75o3r$o7m$1@ulric.tng.de>
GP lisper schrieb:
> On Thu, 26 May 2005 01:26:33 +0200, <······························@justmail.de> wrote:
> 
>>Another reason for calling them fantasy, and this is true for all
>>irrational numbers that they are only a thought in a mathematical world.
>>At least I don't believe there is anything in this universe whose
>>measures have to do with an irrational number. Whenever I look at a
>>circle I only think it is one. In fact it is an object that looks like
>>one without being one.
> 
> 
> There are many real objects with "dimensions" based on Pi.  I see
> plenty of perfect circles, what I don't see is the ratio between the
> radius and circumference.

Where is a perfect circle? It exists in a mathematical world, but not
in reality. Don't forget that the line with which we draw a circle is
only allowed to have one dimension. Even if you tried to draw a circle
out of a row of atoms, it would be a 3D object.

The Planck time is 5.391 × 10^-44 seconds. Whether time is continuous or
discrete does not matter; we can't distinguish between the two, because
we can't measure any more finely. So a theory where time is discrete is
equivalent to one where time is continuous, which means we can say time
is discrete. This means between any two events we have to wait at least
5.391 × 10^-44 seconds. It is not possible to wait a shorter amount of
time.


So, give up all hopes that you are living in a "real universe" ;-)
We are all living inside a big simulation and are only computer programs.
I assume that Kenny was programmed in Lisp *g*


André
-- 
From: William Bland
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <pan.2005.05.25.16.49.55.435492@abstractnonsense.com>
On Tue, 24 May 2005 19:00:51 -0700, jtdubs wrote:
> 
> * (+ 0.04 0.01)
> 
> 0.049999997
> 
> Sorry, I just don't see that as acceptable.  It seems to me that by
> default all floating-points should have arbitrary precision

Then you might be interested in this amusing little hack:

http://groups-beta.google.com/group/comp.lang.lisp/msg/468b60f4efd9e660?hl=en

Best wishes,
		Bill.
From: Raffael Cavallaro
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <2005052515404916807%raffaelcavallaro@pasdespamsilvousplaitdotmaccom>
On 2005-05-25 12:49:56 -0400, William Bland 
<·······@abstractnonsense.com> said:

> Then you might be interested in this amusing little hack:
> 
> http://groups-beta.google.com/group/comp.lang.lisp/msg/468b60f4efd9e660?hl=en
> 
> Best wishes,
> 		Bill.


Unfortunately, this breaks 1+ and 1-, ironically making it impossible 
to compile the definition of read-rational after read-rational has been 
loaded as read-rational uses 1-.
From: William Bland
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <pan.2005.05.25.20.23.42.80839@abstractnonsense.com>
On Wed, 25 May 2005 15:40:49 -0400, Raffael Cavallaro wrote:

> On 2005-05-25 12:49:56 -0400, William Bland 
> <·······@abstractnonsense.com> said:
> 
>> Then you might be interested in this amusing little hack:
>> 
>> http://groups-beta.google.com/group/comp.lang.lisp/msg/468b60f4efd9e660?hl=en
>> 
>> Best wishes,
>> 		Bill.
> 
> 
> Unfortunately, this breaks 1+ and 1-, ironically making it impossible 
> to compile the definition of read-rational after read-rational has been 
> loaded as read-rational uses 1-.

Hehe, sure, that's not very nice.  Wouldn't be too hard to fix though, of
course (although it would make it less of a "little hack").

Best wishes,
		Bill.
From: Pascal Bourguignon
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <87fywbjfop.fsf@thalassa.informatimago.com>
William Bland <·······@abstractnonsense.com> writes:

> On Tue, 24 May 2005 19:00:51 -0700, jtdubs wrote:
>> 
>> * (+ 0.04 0.01)
>> 
>> 0.049999997
>> 
>> Sorry, I just don't see that as acceptable.  It seems to me that by
>> default all floating-points should have arbitrary precision
>
> Then you might be interested in this amusing little hack:
>
> http://groups-beta.google.com/group/comp.lang.lisp/msg/468b60f4efd9e660?hl=en

But then, he might object that:

[26]> (- (* 4 (atan 1.0)) pi)
-1.5099579909788780896L-7

(Note: I typed this expression with the above hack enabled).


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
Kitty like plastic.
Confuses for litter box.
Don't leave tarp around.
From: André Thieme
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <d6vg97$b0b$1@ulric.tng.de>
······@gmail.com wrote:

> I've often wondered why no languages support infinite-precision
> floating-point.  Apparently, bignums were deemed important enough to
> standardize in Common Lisp, but not bigfloats.  Anyone have an
> explanation for this that they'd like to share with me?

One thing I could think of as a first idea is irrational numbers.
The program could find out if something is rational like (/ 1 3) and
convert it to 1/3. But if you are calculating an irrational number you
have to decide when to stop anyway. And for that purpose the current
solution seems fine.

My intent was to calculate the digits of a given number (it might even
happen destructively for my purposes) without using tons of memory
the way write-to-string does. I suspect write-to-string works so fast
because it is implemented in C and gets direct access to the internal
structure of bignums (like fact 20000) - it is probably not stored as a
string under the hood, but as something that is not too hard to convert
into one.


André
-- 
From: ······@gmail.com
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <1116951738.394012.204100@g43g2000cwa.googlegroups.com>
André Thieme wrote:
> ······@gmail.com wrote:
>
> > I've often wondered why no languages support infinite-precision
> > floating-point.  Apparently, bignums were deemed important enough to
> > standardize in Common Lisp, but not bigfloats.  Anyone have an
> > explanation for this that they'd like to share with me?
>
> One thing I could think of as a first idea is irrational numbers.
> The program could find out if something is rational like (/ 1 3) and
> convert it to 1/3. But if you are calculating an irrational number you
> have to decide when to stop anyway. And for that purpose the current
> solution seems fine.

Good point.  I wonder if this would really be a problem though.

In what ways can you produce an infinitely long number?  I can think of
a few.

1. Type it in.  :-).  That's clearly not feasible.
2. Arrive at it as a result of a calculation.  Now that's better.

But what sort of calculation can produce a number of infinite length?
I can think of only a few of these, also.

1. An infinitely long calculation

An example of this would be using an infinite series or
summation to calculate pi or e.  No problem there, because it will take
infinite time to get your infinite digits.

2. Division

What sort of numbers, when divided, produce an infinitely long result?
Infinitely long numbers are one such case, but that's our base case, so
that doesn't count.  Simple numbers whose quotient is a repeating
decimal are another.  All of these can be expressed as ratios or
rationals, which I would say is their more natural form.  So, given 0.5
/ 1.5 in our system, we can find a common denominator and turn this
into 1/3 because we know 0.5 and 1.5 perfectly accurately.  So, I don't
think these are a problem.

3. Exponentiation with a negative exponent

Square roots and the like.  These are the toughest, I suspect.  Perhaps
these need to be introduced as a valid numeric type also so that (expt
2 -2) results in some sort of #<EXPT 2 ^ -2> value which doesn't get
expanded.  Coercing this to a bigfloat would be an infinitely long
operation, but I can live with that.
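Something like this, say (just a sketch with made-up names, and assuming
the interesting case is really a fractional exponent such as a square
root - (expt 2 -2) itself is just the rational 1/4):

;; Sketch only: keep the root symbolic, expand to digits on demand.
(defstruct (deferred-sqrt (:constructor deferred-sqrt (radicand)))
  radicand)

(defun expand-digits (x digits)
  "Rational approximation of X, good to DIGITS decimal places."
  (etypecase x
    (rational x)
    (deferred-sqrt
     (let ((scale (expt 10 digits)))
       ;; ISQRT of the scaled radicand gives floor(sqrt(n) * 10^DIGITS).
       (/ (isqrt (* (deferred-sqrt-radicand x) scale scale)) scale)))))

;; (expand-digits (deferred-sqrt 2) 20) => a rational within 10^-20 of sqrt(2)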

That's all I can come up with.  I think this could be made to work, but
it would likely require a richer number system to build on.

Thoughts?

Justin Dubs
From: Kent M Pitman
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <ull64ilk7.fsf@nhplace.com>
···@zedat.fu-berlin.de (Stefan Ram) writes:

> ······@gmail.com writes:
> >But, what sort of calculation can produce a number of infinite length.
> 
>   A length is not a property of a number, but of a
>   representation of a number - it depends on the representation
>   system used.

I'm glad someone made this point.

Additionally, even among computed values, it's easy to make numbers longer
than they really should be.

Just because Spock or Data can calculate the number of seconds
remaining before their respective Enterprises blow up to 50 digits
doesn't mean they can speak their concerns out loud in English with
sufficient precision about where they are counting from to make the
exercise of mentioning that number even remotely meaningful.

Few quantities in the world are known to infinite precision.
Mass, speed, volume, etc. of real objects are usually known only to
finite precision, and computations on them may increase the apparent
precision, but not the actual precision (which usually gets worse as
you combine it more).

Before investing a lot of time in "infinite" precision, I'd personally
recommend developing a serious theory of precision that combines
correctly and discards bits that are not conveying real information.

But also, one reason I personally tend to steer clear of these
discussions is that career mathematicians have studied the area for a
large number of years.  It seems to me that this is a multidimensional
design space with no canonical optimization point that will satisfy every
need.  It's probably just because of my background in language design,
but I always fear that people are talking about "how to make the language
better" whenever they get involved in a discussion about "what's needed"
or "what's better than what".  I personally always wish these discussions
would turn away from the language and toward the making of personal libraries.
At that point, the only question for the language should be "am I kept from
writing my personal library by something in the language".  And as far
as the "what's better than what" question, that reduces to "why not many
solutions?" and "do the many solutions each really do what they are 
documented to do".

For example, MACSYMA, in MACLISP, pre-Common Lisp, had bigfloats in it
back in the 1970's, but these were not "infinite" precision floats,
they were "arbitrary" (and finite) precision floats.  Bigfloats were
well-known to the designers of Common Lisp, but were not included in
the language.  I think there were both technical and social reasons
for this.  But that doesn't mean such a package wouldn't be great
to have as a standalone library.

As I recall, David Stoutemeyer (who developed MuMath, a symbolic math
product for a tiny lisp called MuLisp), was working on a
representation of real numbers that would allow some sets of reals to
be represented precisely.  I wasn't in touch with him on an ongoing
basis over the last couple of decades, so I don't know where that work
went, but it would also make a fine library.

I'd also love to see a binary coded decimal (or equivalent) library.
From: Frank Buss
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <syumun31wq4g.6td4eflgm336$.dlg@40tude.net>
Kent M Pitman wrote:

> Before investing a lot of time in "infinite" precision, I'd personally
> recommend developing a serious theory of precision that combines
> correctly and discards bits that are not conveying real information.

You may take a look at "Intervallarithmetik" (I don't know the English word
for it):

http://www2.informatik.uni-wuerzburg.de/forsch/bereiche/Intervallarithmetik.xml?i2statuslang=de

There is even hardware support for calculating with these
floating-point intervals.
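The core idea fits in a few lines (this is only a sketch, not the
package behind that link): every value carries exact rational bounds,
and every operation widens them so the true result always stays inside.

(defstruct (interval (:constructor interval (lo hi)))
  lo hi)

(defun int-add (a b)
  "Sum of two intervals; the exact sum lies inside the result."
  (interval (+ (interval-lo a) (interval-lo b))
            (+ (interval-hi a) (interval-hi b))))

(defun int-mul (a b)
  "Product of two intervals, taking the extreme corner products."
  (let ((corners (list (* (interval-lo a) (interval-lo b))
                       (* (interval-lo a) (interval-hi b))
                       (* (interval-hi a) (interval-lo b))
                       (* (interval-hi a) (interval-hi b)))))
    (interval (reduce #'min corners) (reduce #'max corners))))

;; (int-add (interval 4/100 4/100) (interval 1/100 1/100))
;; => an interval with lo = hi = 1/20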

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Bruce Stephens
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <87br70wil4.fsf@cenderis.demon.co.uk>
Frank Buss <··@frank-buss.de> writes:

[...]

> you may take a look at "Intervallarithmetik" (I don't know the
> english word for it):

It's "interval arithmetic".

[...]
From: Frank Buss
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <sxkl6xn57lek.1nf5hluh6va5v$.dlg@40tude.net>
Bruce Stephens wrote:

> It's "interval arithmetic".

thanks, this wasn't too difficult :-) And Google finds some interesting
descriptions for it, for example:

http://csr.uvic.ca/~vanemden/research/ia.html

-- 
Frank Buß, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: George Neuner
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <jml7915d7jmmo9nof5s1jql57uic3bsop3@4ax.com>
On Tue, 24 May 2005 17:21:37 GMT, Kent M Pitman <······@nhplace.com>
wrote:


>For example, MACSYMA, in MACLISP, pre-Common Lisp, had bigfloats in it
>back in the 1970's, but these are not "infinite" precision floats,
>they were "arbitrary" (and finite) precision floats.  Bigfloats were
>well-known to the designers of Common Lisp, but were not included in
>the language.  I think there were both technical and social reasons
>for this.  But that doesn't mean such a package wouldn't be a great
>to have as a standalone library.
>
>As I recall, David Stoutemeyer (who developed MuMath, a symbolic math
>product for a tiny lisp called MuLisp), was working on a
>representation of real numbers that would allow some sets of reals to
>be represented precisely.  I wasn't in touch with him on an ongoing
>basis over the last couple of decades, so I don't know where that work
>went, but it would also make a fine library.

I recall reading some years ago about representing fractions with a
series of factorials (x/2! + y/3! + ... ).  I don't remember who wrote
the paper (I can't find it now), but the author claimed that an
implementation using this format could represent many more reals than
if it used equivalent sized IEEE formats.

I screwed around with it for a while back when my best computer was an
i286 with 512K and no FPU, but I never came close to getting a real
working implementation.  The software FP libraries included with
TurboC quite handily trounced the best x86 assembler I could write.

George
-- 
for email reply remove "/" from address
From: André Thieme
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <d6vmbs$gak$1@ulric.tng.de>
······@gmail.com wrote:
> André Thieme wrote:
> 
>>······@gmail.com wrote:
>>
>>
>>>I've often wondered why no languages support infinite-precision
>>>floating-point.  Apparently, bignums were deemed important enough to
>>>standardize in Common Lisp, but not bigfloats.  Anyone have an
>>>explanation for this that they'd like to share with me?
>>
>>One thing I could think of as a first idea is irrational numbers.
>>The program could find out if something is rational like (/ 1 3) and
>>convert it to 1/3. But if you are calculating an irrational number you
>>have to decide when to stop anyway. And for that purpose the current
>>solution seems fine.
> 
> 
> Good point.  I wonder if this would really be a problem though.
> 
> In what ways can you produce an infinitely long number?  I can think of
> a few.
> 
> 1. Type it in.  :-).  That's clearly not feasible.
> 2. Arrive at it as a result of a calculation.  Now that's better.
> 
> But what sort of calculation can produce a number of infinite length?
> I can think of only a few of these, also.

Nearly all calculations do that. The funny thing is that our life works
pretty well without looking at them. Our world works without irrational
numbers. But as there are uncountably many irrational numbers and only
countably many rational numbers, nearly all calculations are about
irrational numbers. We are just lucky that we never need them. ;-)


André
-- 
From: André Thieme
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <d6vmie$gcm$1@ulric.tng.de>
Stefan Ram wrote:
> ······@gmail.com writes:
> 
>>But, what sort of calculation can produce a number of infinite length.
> 
> 
>   A length is not a property of a number, but of a
>   representation of a number - it depends on the representation
>   system used.

Every number in every number system has infinite length. It is only a
convention not to write the zeros.

A property of irrational numbers is that in every number system they
have an infinite expansion without a period.


André
-- 
From: Edi Weitz
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <uzmuk1uuy.fsf@agharta.de>
On Tue, 24 May 2005 17:14:02 +0200, André Thieme <······························@justmail.de> wrote:

> I suspect write-to-string works so fast because it is implemented in
> C and gets direct access to the internal structure of bignums

Which Lisp?  If you're not talking about CLISP I'm pretty sure that
WRITE-TO-STRING (or any other Lisp function) is /not/ implemented in
C.

Cheers,
Edi.

-- 

Lisp is not dead, it just smells funny.

Real email: (replace (subseq ·········@agharta.de" 5) "edi")
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <87r7fwbnu8.fsf@qrnik.zagroda>
Edi Weitz <········@agharta.de> writes:

>> I suspect write-to-string works so fast because it is implemented in
>> C and gets direct access to the internal structure of bignums
>
> Which Lisp?  If you're not talking about CLISP I'm pretty sure that
> WRITE-TO-STRING (or any other Lisp function) is /not/ implemented in
> C.

clisp-2.33.2/src/intprint.d which converts an integer to a string is
written in C.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Edi Weitz
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <usm0cbnc7.fsf@agharta.de>
On Tue, 24 May 2005 18:17:19 +0200, Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> wrote:

> Edi Weitz <········@agharta.de> writes:
>
>> Which Lisp?  If you're not talking about CLISP I'm pretty sure that
>> WRITE-TO-STRING (or any other Lisp function) is /not/ implemented
>> in C.
>
> clisp-2.33.2/src/intprint.d which converts an integer to a string is
> written in C.

So, what is your point?  I specifically excluded CLISP.  Was my
message too long to be read completely?

-- 

Lisp is not dead, it just smells funny.

Real email: (replace (subseq ·········@agharta.de" 5) "edi")
From: André Thieme
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <d6vlo2$fqr$1@ulric.tng.de>
Edi Weitz wrote:
> On Tue, 24 May 2005 17:14:02 +0200, André Thieme <······························@justmail.de> wrote:
> 
> 
>>I suspect write-to-string works so fast because it is implemented in
>>C and gets direct access to the internal structure of bignums
> 
> 
> Which Lisp?  If you're not talking about CLISP I'm pretty sure that
> WRITE-TO-STRING (or any other Lisp function) is /not/ implemented in
> C.

I forgot to mention that I am using CLISP here at home.
I don't know how fast other Lisps can convert an integer into a string.
My guess is that if they can do it very fast, they use some internal
functions to access the internal representation of a bignum.
Otherwise, how would you implement write-to-string?
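One plausible scheme (only a sketch, not CLISP's actual code) is
divide and conquer: split the number at a power of ten near the middle,
convert both halves recursively, and concatenate, so the work is done
by a few large divisions instead of one division by 10 per digit:

(defun integer-to-decimal-string (n)
  "Sketch of divide-and-conquer conversion of a non-negative integer."
  (labels ((rec (n ndigits)
             ;; Produce exactly NDIGITS digits, zero-padded on the left.
             (if (<= ndigits 9)
                 (format nil "~v,'0D" ndigits n)
                 (let* ((low (floor ndigits 2))
                        (divisor (expt 10 low)))
                   (multiple-value-bind (hi lo) (floor n divisor)
                     (concatenate 'string
                                  (rec hi (- ndigits low))
                                  (rec lo low)))))))
    (if (zerop n)
        "0"
        ;; 30103/100000 > log10(2), so this estimate never falls short;
        ;; overestimating only produces leading zeros, trimmed below.
        (string-left-trim "0"
                          (rec n (ceiling (* (integer-length n)
                                             30103/100000)))))))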


André
-- 
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <87vf58bo25.fsf@qrnik.zagroda>
······@gmail.com writes:

> I've often wondered why no languages support infinite-precision
> floating-point.

What would be sqrt 2, or log 5345, or sin 63526? How many digits
to compute?

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Bruno Haible
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <d7enui$dn0$2@laposte.ilog.fr>
Marcin 'Qrczak' Kowalczyk wrote:
>
>> I've often wondered why no languages support infinite-precision
>> floating-point.
>
> What would be sqrt 2, or log 5345, or sin 63526? How many digits
> to compute?

That's easy: Compute as many digits as the user wants to see.
Similar to the way infinite-length lists are printed in Common Lisp:
A variable *PRINT-LENGTH* specifies how many list elements to print.

Here's a sample interaction with Michael Stoll's "computable real numbers"
package [1]:

 > (setq a (sqrt-r 3))
 +1.73205080756887729353...
 > (setq b (tan-r (/r pi-r 3)))
 +1.73205080756887729353...
 > (setq diff (-r a b))
 +0.00000000000000000000...
 > (setq *print-prec* 70)
 70
 > diff
 +0.0000000000000000000000000000000000000000000000000000000000000000000000...
 > a
 +1.7320508075688772935274463415058723669428052538103806280558069794519330...
 > b
 +1.7320508075688772935274463415058723669428052538103806280558069794519330...

As you can see, the user can set the precision of the result a posteriori,
and have all numbers recompute themselves to the necessary precision. 
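To give a feel for how such a thing can work at all - this is only a toy
sketch, not the interface of the package below - one can represent a
real as a closure that, for a requested K, returns an integer A with
|A/2^K - x| <= 2^-K:

(defun exact->real (q)
  "Computable real for an exact rational Q."
  (lambda (k) (round (* q (expt 2 k)))))

(defun add-reals (x y)
  "Sum of two computable reals, asking each operand for two guard bits."
  (lambda (k) (round (+ (funcall x (+ k 2)) (funcall y (+ k 2))) 4)))

(defun real->rational (x decimals)
  "Rational approximation of X, good to about DECIMALS decimal places."
  (let ((k (+ 8 (ceiling (* decimals 7) 2))))   ; ~3.5 bits per decimal digit
    (/ (funcall x k) (expt 2 k))))

;; (real->rational (add-reals (exact->real 4/100) (exact->real 1/100)) 10)
;; => a rational within 10^-10 of 1/20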

             Bruno


[1] http://www.haible.de/bruno/MichaelStoll/reals.html
From: Nicolas Neuss
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <87mzqdhqw9.fsf@ortler.iwr.uni-heidelberg.de>
Bruno Haible <·····@clisp.org> writes:

> [1] http://www.haible.de/bruno/MichaelStoll/reals.html

Wow!  Very nice work.  Thank you for this link.

Nicolas.
From: Ray Dillinger
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <PHUxe.2972$p%3.17875@typhoon.sonic.net>
Bruno Haible wrote:

> As you can see, the user can set the precision of the result a posteriori,
> and have all numbers recompute themselves to the necessary precision. 
> 
>              Bruno

> [1] http://www.haible.de/bruno/MichaelStoll/reals.html

Nice work!  The basic stratagem being to represent promises
instead of attempting to represent numbers.  The "leaves"
of the promise are all guaranteed to be easily-representable
numbers because the user had to type them into the program.

Then you actually do the computations (force the promise)
when you know how much precision is desired, and you can
backpropagate requirements for "guard digits" etc back
into the promise!

Yay!  I love it when something does "the right thing" with
numbers.

				Bear
From: Joe Marshall
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <64vrn2ad.fsf@comcast.net>
Ray Dillinger <····@sonic.net> writes:

> Bruno Haible wrote:
>
>> As you can see, the user can set the precision of the result a posteriori,
>> and have all numbers recompute themselves to the necessary
>> precision.              Bruno
>
>> [1] http://www.haible.de/bruno/MichaelStoll/reals.html
>
> Nice work!  The basic stratagem being to represent promises
> instead of attempting to represent numbers.  The "leaves"
> of the promise are all guaranteed to be easily-representable
> numbers because the user had to type them into the program.
>
> Then you actually do the computations (force the promise)
> when you know how much precision is desired, and you can
> backpropagate requirements for "guard digits" etc back
> into the promise!
>
> Yay!  I love it when something does "the right thing" with
> numbers.

An alternative approach is to use continued fractions or Moebius
transforms.  Rather than recomputing when you need more precision,
each number will be incrementally extended as the extra precision is
demanded.  You can either ask for the ultimate result to a certain
level of precision, or you can just ask it to keep printing digits
until you are bored.
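For a concrete taste of the continued-fraction flavour (only a sketch,
not a general implementation): sqrt(2) = [1; 2, 2, 2, ...], so taking
more terms of the expansion simply buys more correct digits:

(defun sqrt2-convergent (n)
  "Convergent p/q of the continued fraction of sqrt(2); larger N, more digits."
  (do ((i 0 (1+ i))
       (p0 1 p1) (p1 3 (+ (* 2 p1) p0))
       (q0 1 q1) (q1 2 (+ (* 2 q1) q0)))
      ((>= i n) (/ p1 q1))))

;; (sqrt2-convergent 0) => 3/2
;; (sqrt2-convergent 5) => 239/169, already good to about four decimal places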


-- 
~jrm
From: Juliusz Chroboczek
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <7ioe9jzmum.fsf@lanthane.pps.jussieu.fr>
Bruno Haible:

> [1] http://www.haible.de/bruno/MichaelStoll/reals.html

Interesting.  How does <= work on infinite-precision things that
happen to be equal?

                                        Juliusz
From: Joe Marshall
Subject: Re: Infinite precision floating-point
Date: 
Message-ID: <zmt3lgxi.fsf@comcast.net>
Juliusz Chroboczek <···@pps.jussieu.fr> writes:

> Bruno Haible:
>
>> [1] http://www.haible.de/bruno/MichaelStoll/reals.html
>
> Interesting.  How does <= work on infinite-precision things that
> happen to be equal?

If you read the source code, you will see this comment:

;;;; For comparison operations etc. a precision threshold is used. It is
;;;; defined through the dynamic variable *CREAL-TOLERANCE*. Its value should
;;;; be a nonnegative integer n, meaning that numbers are considered equal
;;;; if they differ by at most 2^(-n).

-- 
~jrm
From: André Thieme
Subject: Re: Infinite precision floating-point (was Re: Counting number of digits)
Date: 
Message-ID: <d731js$cfc$2@ulric.tng.de>
······@gmail.com wrote:
> André Thieme wrote:
> 
>>······@gmail.com wrote:
>>
>>
>>>I don't know if you will view these as cheating or not but...
>>
>>>* (time (floor (1+ (log (fact 20000) 10))))
>>>
>>>Evaluation took:
>>>  3.069 seconds of real time
>>>  1.504822 seconds of user run time
>>>  1.033656 seconds of system run time
>>>  0 page faults and
>>>  339,358,200 bytes consed.
>>>77338
>>>0.2578125
>>
>>I don't see this solution as cheating. Also for factorials it will
>>work.
>>
>>There is only one problem with using it for implementing count-digits:
>>for big numbers it will not work that way:
>>CL-USER> (log 999998880 10)
>>8.999999
>>CL-USER> (log 999998881 10)
>>9.0
>>
>>If a big number is close to getting a new digit the floating point unit
>>can give inaccurate results. Changing that to another type will help for
>>bigger numbers, but not for really big numbers:
>>CL-USER> (coerce 9999999999999999999 'long-float)
>>9.999999999999999999L18
>>CL-USER> (coerce 99999999999999999999 'long-float)
>>1.0L20
>>
>>CL-USER> (SETF (EXT:LONG-FLOAT-DIGITS) 250)
>>250
>>CL-USER> (1+ (floor (log (coerce (parse-integer (make-string 60
>>:initial-element #\9)) 'long-float) 10)))
>>60
>>CL-USER> (1+ (floor (log (coerce (parse-integer (make-string 70
>>:initial-element #\9)) 'long-float) 10)))
>>70
>>CL-USER> (1+ (floor (log (coerce (parse-integer (make-string 80
>>:initial-element #\9)) 'long-float) 10)))
>>81
>>
>>
>>Okay, I have to admit that this would probably not happen in real
>>life programs, but at the moment I don't see an easy way to solve that
>>problem. Our long-floats should have as many decimal places as the
>>number has digits.
> 
> 
> I've often wondered why no languages support infinite-precision
> floating-point.  Apparently, bignums were deemed important enough to
> standardize in Common Lisp, but not bigfloats.  Anyone have an
> explanation for this that they'd like to share with me?
> 
> It even seems easy to implement.  Just a cons of bignums.  One for the
> number and one for the exponent.  They'd be twice as slow as bignums,
> but, if you need infinite-precision floating-point, accuracy is
> probably more of a concern than speed...
> 
> Justin Dubs
> 


Btw, it is so trivial to do it correctly, I didn't see it until now ;-)

Instead of
(time (floor (1+ (log (fact 20000) 10))))
we can do
(time (ceiling (log (fact 20000) 10)))

This will even work for numbers like 999999999.


André
-- 
From: André Thieme
Subject: Re: Counting number of digits
Date: 
Message-ID: <d6u2n4$2rl$1@ulric.tng.de>
Moving in the right direction:
CL-USER> (defun count-digits (n)
	   (do ((result 0 (1+ result))
	      (number 1 (* number 10)))
	     ((< n number) result)))

CL-USER> (time (count-digits (fact 20000)))

Real time: 19.267706 sec.
Run time: 19.267706 sec.
Space: 1545941272 Bytes
GC: 2345, GC time: 11.726862 sec.
77338

André
-- 
From: Arthur Lemmens
Subject: Re: Counting number of digits
Date: 
Message-ID: <opsq9qfxe4k6vmsw@news.xs4all.nl>
André Thieme wrote:

> After reading the thread about cmucl crashing while calculating the
> factorial of big numbers I wanted to know out of how many digits some
> big numbers consist.

INTEGER-LENGTH should give you a good start.
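
For example (a sketch of one way to take that hint, not a tested library
function): the bit length brackets the decimal digit count, and exact
integer comparisons settle it, so no floating point is involved and no
77338-character string is ever built:

(defun count-digits (n)
  "Number of decimal digits of the integer N."
  (if (zerop n)
      1
      (let* ((n (abs n))
             ;; 30103/100000 > log10(2), so GUESS never underestimates.
             (guess (ceiling (* (integer-length n) 30103/100000))))
        ;; The estimate can overshoot by one; walk it down exactly.
        (loop while (< n (expt 10 (1- guess)))
              do (decf guess))
        guess)))

;; (count-digits (fact 20000)) => 77338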
From: GP lisper
Subject: Re: Counting number of digits
Date: 
Message-ID: <1116922504.b3ceb5cf2a2f163825ef0a80bda61541@teranews>
> André Thieme wrote:
>
>> After reading the thread about cmucl crashing while calculating the
>> factorial of big numbers

Note that this is a solved problem, the OP had an old image.

-- 
With sufficient thrust, pigs fly fine.
From: André Thieme
Subject: Re: Counting number of digits
Date: 
Message-ID: <d72cc8$qbk$1@ulric.tng.de>
GP lisper wrote:
>>André Thieme wrote:
>>
>>
>>>After reading the thread about cmucl crashing while calculating the
>>>factorial of big numbers
> 
> 
> Note that this is a solved problem, the OP had an old image.

Now I have a problem with CLISP 2.33.1 on Win XP:

CL-USER> (defun fact (n)
            (if (< n 2)
                1
                (let ((acc 1))
                  (dotimes (x n acc) (setf acc (* acc (1+ x)))))))
FACT
CL-USER> (compile 'fact)
FACT
NIL
NIL
CL-USER> (time (floor (1+ (log (fact 20000) 10))))

results in:
floating point overflow
    [Condition of type SYSTEM::SIMPLE-FLOATING-POINT-OVERFLOW]

If I remember correctly, in CMUCL this didn't happen to me.
Can anyone please confirm whether this works on his/her CLISP?


André
-- 
From: Christopher C. Stacy
Subject: Re: Counting number of digits
Date: 
Message-ID: <ufywbgq22.fsf@news.dtpq.com>
(setq x (fact 20000)) => bignum
(log x 10)            => floating point overflow
From: Patrick May
Subject: Re: Counting number of digits
Date: 
Message-ID: <m2fywb843m.fsf@patrick.intamission.com>
······@news.dtpq.com (Christopher C. Stacy) writes:
> (setq x (fact 20000)) => bignum
> (log x 10)            => floating point overflow

     Same result with clisp 2.33.2-1 under Fedora Core 3.

Patrick
From: André Thieme
Subject: Re: Counting number of digits
Date: 
Message-ID: <d730md$c0c$1@ulric.tng.de>
Patrick May wrote:
> ······@news.dtpq.com (Christopher C. Stacy) writes:
> 
>>(setq x (fact 20000)) => bignum
>>(log x 10)            => floating point overflow
> 
> 
>      Same result with clisp 2.33.2-1 under Fedora Core 3.


Okay, thank you, you two.
Perhaps Bruno or Sam are reading this...


André
-- 
From: Bruno Haible
Subject: Re: Counting number of digits
Date: 
Message-ID: <d7enk7$dn0$1@laposte.ilog.fr>
> (setq x (fact 20000)) => bignum
> (log x 10)            => floating point overflow

The LOG function tries to convert its argument to a SINGLE-FLOAT first.
But SINGLE-FLOAT's exponent range is not sufficient for this.
(Remember that IEEE754 SINGLE-FLOAT is good for numbers with at most
38 digits!)

As a workaround, you can use long-floats:

  (log x 10L0) => 77337.25988195615874L0
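
and from there the digit count discussed earlier in the thread follows
directly:

  (1+ (floor (log x 10L0))) => 77338

(with the caveat raised earlier that a floating-point log can land on
the wrong side of an integer for numbers such as 999999999, so an exact
integer method is still the safer route).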

Bruno
From: Joel Ray Holveck
Subject: Re: Counting number of digits
Date: 
Message-ID: <y7chdgsxnao.fsf@sindri.juniper.net>
> "Okay" I thought, "it should be possible to count the digits faster".
> We don't really need to convert the number to a string and then count
> its length.

You don't really need to compute the actual digits, either.
Stirling's Formula, an approximation for factorials, can do you well
here.

(defun stirling-log (n &optional upper-bound-p)
  "Returns an approximation to ln(n!), according to Stirling's Formula.
The returned value is either the lower bound or upper bound, depending
on UPPER-BOUND-P."
  (+
   (* (log n) (+ n 1/2))
   (- n)
   #.(* 1/2 (log (* 2 pi)))
   (if upper-bound-p
       (/ (* 12 n))
       0)))
(defun convert-log-base (old-base new-base n)
  "If N = (LOG X OLD-BASE), this returns (LOG X NEW-BASE)"
  (* n (log old-base new-base)))
(defun log-to-digits (logn &optional (base 10))
  "Returns the number of digits in N, given ln(n).
The optional argument BASE is the base for counting digits.  The base
of the logarithm in LOGN must be e."
  (max 1 (floor (+ 1 (convert-log-base #.(exp 1) base logn)))))
(defun n!-digits (n &optional (base 10))
  "Returns the number of digits in n!
Actually, it returns two values: the lower bound and the upper bound.
They're almost always the same, but I don't see any mathematical
certainty that there can't be a difference of 1."
  (if (= n 0)
      ;; This algorithm blows up if n=0.
      (values 1 1)
      (values (log-to-digits (stirling-log n nil) base)
	      (log-to-digits (stirling-log n t) base))))
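
For the factorial from the start of the thread this should agree with
the exact count: (n!-digits 20000) is expected to return 77338 for both
bounds, without ever computing 20000! itself.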

Share and enjoy,
joelh

PS: For what it's worth, you'd need 6.10e159 bytes to store (100!)! ,
not counting the bignum overhead and space to compute it.