From: Christophe Turle
Subject: demonic numbers !
Date: 
Message-ID: <42028742$0$513$626a14ce@news.free.fr>
How can I make CL (CLISP) read decimal numbers as rationals?

For example, I have to read an entry like 102.0051, but this is interpreted
as a float! And I don't know how to write the function F such that:

(F entry) => "102005100", which in theory is just 'multiply by 1000000'.

If the entry were a double-float it would work with RATIONALIZE, but I
can't change the inputs (and I don't want to; I don't want to know what
the underlying number representation is!)


;; this sums up the problem ;)
> (* 102.0051 10000)
1020050.94

I have an idea which works, but I don't know how to hook it in:
102.0051 => "102.0051" => "102" "0051" => (+ 102 (/ 51 (expt 10 4))) => 
1020051/10000
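
Spelled out, the splitting step would be something like this (an untested
sketch; it assumes an unsigned decimal string, and the function name is
just made up):

;; Untested sketch: turn a decimal string into an exact rational by hand.
;; Assumes an unsigned decimal with digits on both sides of the dot.
(defun decimal-string->rational (string)
  (let ((dot (position #\. string)))
    (if dot
        (+ (parse-integer string :end dot)
           (/ (parse-integer string :start (1+ dot))
              (expt 10 (- (length string) dot 1))))
        (parse-integer string))))

;; (decimal-string->rational "102.0051") => 1020051/10000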

The problem here is that I do the job of the reader a second time, and
after a potential loss of precision:

CL-USER> (read-from-string ".111111111101")
0.11111111

So, is there a way to hook this directly into the reader, switching
behaviors with something like (setf *read-decimal-as* :rational) and
(setf *read-decimal-as* :float)?



-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 

From: Pascal Bourguignon
Subject: Re: demonic numbers !
Date: 
Message-ID: <87zmylcqgo.fsf@thalassa.informatimago.com>
"Christophe Turle" <······@nospam.com> writes:

> How can I make CL (CLISP) read decimal numbers as rationals?
> 
> For example, I have to read an entry like 102.0051, but this is interpreted
> as a float! And I don't know how to write the function F such that:
> 
> (F entry) => "102005100", which in theory is just 'multiply by 1000000'.
> 
> If the entry were a double-float it would work with RATIONALIZE, but I
> can't change the inputs (and I don't want to; I don't want to know what
> the underlying number representation is!)

Merely by introducing a reader macro that will do the rational reading.
For example, you could use:

    #/123.4567 --> 1234567/10000

You might get inspiration from the currency syntax macro in my
COM.INFORMATIMAGO.COMMON-LISP.INVOICE package.

See: http://www.informatimago.com/develop/lisp/#cvs
for CVS checkout instructions.
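
A minimal stand-in for it might look like this (a sketch written for this
post, not the actual INVOICE code):

(set-dispatch-macro-character #\# #\/
  (lambda (stream subchar arg)
    (declare (ignore subchar arg))
    ;; Collect digits and the dot, then build the exact rational by hand.
    (let* ((chars (loop for c = (peek-char nil stream nil nil)
                        while (and c (or (digit-char-p c) (char= c #\.)))
                        collect (read-char stream)))
           (string (coerce chars 'string))
           (dot (position #\. string)))
      (if dot
          (+ (parse-integer string :end dot)
             (/ (parse-integer string :start (1+ dot))
                (expt 10 (- (length string) dot 1))))
          (parse-integer string)))))

;; #/123.4567 => 1234567/10000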


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <42051188$0$14163$626a14ce@news.free.fr>
"Pascal Bourguignon" <····@mouse-potato.com> a �crit dans le message de 
news: ··············@thalassa.informatimago.com...
> "Christophe Turle" <······@nospam.com> writes:
>
>> How can I make CL (CLISP) read decimal numbers as rationals?
>>
>> For example, I have to read an entry like 102.0051, but this is
>> interpreted as a float! And I don't know how to write the function F
>> such that:
>>
>> (F entry) => "102005100", which in theory is just 'multiply by 1000000'.
>>
>> If the entry were a double-float it would work with RATIONALIZE, but I
>> can't change the inputs (and I don't want to; I don't want to know what
>> the underlying number representation is!)
>
> Merely by introducing a reader macro that will do the rational reading.
> For example, you could use:
>
>    #/123.4567 --> 1234567/10000

I can't change the inputs. I have the input string "(x 123.2569 y
1256.3588)", for example. So I can't introduce reader macros :(

Actually, I'm binding *read-default-float-format*:

(let ((*read-default-float-format* 'double-float))
   (read-from-string "(x 123.2569 y 1256.3588)" ))
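
With RATIONALIZE on top (as I said, it works once the entry is a
double-float), the decimals do come back exactly, at least while double
precision holds. A sketch:

(let ((*read-default-float-format* 'double-float))
  (mapcar (lambda (x) (if (floatp x) (rationalize x) x))
          (read-from-string "(x 123.2569 y 1256.3588)")))
;; => (X 1232569/10000 Y 3140897/2500)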


But the problem comes back if a double-float (or a longer format) is not sufficient...


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Pascal Bourguignon
Subject: Re: demonic numbers !
Date: 
Message-ID: <87r7jubxvn.fsf@thalassa.informatimago.com>
"Christophe Turle" <······@nospam.com> writes:
> > Merely by introducing a reader macro that will do the rational reading.
> > For example, you could use:
> >
> >    #/123.4567 --> 1234567/10000
> 
> I can't change the inputs. I have the input string "(x 123.2569 y
> 1256.3588)", for example. So I can't introduce reader macros :(

The Lisp reader is good for reading Lisp data.  If you want to read
application-specific data, then you have to implement your own reader.
Don't be misled by the parentheses.  Check the Dragon Book.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

This is a signature virus.  Add me to your signature and help me to live.
From: William Bland
Subject: Re: demonic numbers !
Date: 
Message-ID: <pan.2005.02.05.20.26.56.765281@abstractnonsense.com>
On Sat, 05 Feb 2005 20:38:36 +0100, Pascal Bourguignon wrote:

> "Christophe Turle" <······@nospam.com> writes:
>> > Merely by introducing a reader macro that will do the rational reading.
>> > For example, you could use:
>> >
>> >    #/123.4567 --> 1234567/10000
>> 
>> I can't change the inputs. I have the input string "(x 123.2569 y
>> 1256.3588)", for example. So I can't introduce reader macros :(
> 
> The Lisp reader is good for reading Lisp data.  If you want to read
> application-specific data, then you have to implement your own reader.
> Don't be misled by the parentheses.  Check the Dragon Book.


It's a bit messy, but the following seems to work ok for me:

(defvar *digits* '(#\0 #\1 #\2 #\3 #\4 #\5 #\6 #\7 #\8 #\9 #\.))

(defun read-rational (stream char)
  "read all decimals as rationals"
  (let ((this-char nil)
        (result (position char *digits*))
        (n 0)
        (position-of-dot nil))
    ;; peek with EOF handling, so a number at the end of input doesn't error
    (loop while (member (peek-char nil stream nil nil) *digits*) do
         (setf this-char (read-char stream))
         (if (eql this-char #\.)
             (setf position-of-dot n)
             (setf result (+ (* 10 result) (position this-char *digits*))))
         (incf n))
    (if position-of-dot
        ;; divide by 10^(number of digits after the dot)
        (/ result (expt 10 (1- (- n position-of-dot))))
        result)))

(mapcar (lambda (c)
          (set-macro-character c #'read-rational))
        (remove #\. *digits*)) ;; #\. is used for other things!
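
For instance, on the kind of input mentioned upthread this should give
(the 24 being READ-FROM-STRING's second value):

CL-USER> (read-from-string "(x 123.2569 y 1256.3588)")
(X 1232569/10000 Y 3140897/2500)
24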


Best wishes,
		Bill.
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <420609b3$0$1009$626a14ce@news.free.fr>
"William Bland" <·······@abstractnonsense.com> a �crit dans le message de 
news: ······························@abstractnonsense.com...
> On Sat, 05 Feb 2005 20:38:36 +0100, Pascal Bourguignon wrote:
>
>> "Christophe Turle" <······@nospam.com> writes:
>>> > Merely by introducing a reader macro that will do the rational
>>> > reading. For example, you could use:
>>> >
>>> >    #/123.4567 --> 1234567/10000
>>>
>>> I can't change the inputs. I have the input string "(x 123.2569 y
>>> 1256.3588)", for example. So I can't introduce reader macros :(
>>
>> The Lisp reader is good for reading Lisp data.  If you want to read
>> application-specific data, then you have to implement your own reader.
>> Don't be misled by the parentheses.  Check the Dragon Book.
>
>
> It's a bit messy, but the following seems to work ok for me:
>
> (defvar *digits* '(#\0 #\1 #\2 #\3 #\4 #\5 #\6 #\7 #\8 #\9 #\.))
>
> (defun read-rational (stream char)
>   "read all decimals as rationals"
>   (let ((this-char nil)
>         (result (position char *digits*))
>         (n 0)
>         (position-of-dot nil))
>     ;; peek with EOF handling, so a number at the end of input doesn't error
>     (loop while (member (peek-char nil stream nil nil) *digits*) do
>          (setf this-char (read-char stream))
>          (if (eql this-char #\.)
>              (setf position-of-dot n)
>              (setf result (+ (* 10 result) (position this-char *digits*))))
>          (incf n))
>     (if position-of-dot
>         ;; divide by 10^(number of digits after the dot)
>         (/ result (expt 10 (1- (- n position-of-dot))))
>         result)))
>
> (mapcar (lambda (c)
>           (set-macro-character c #'read-rational))
>         (remove #\. *digits*)) ;; #\. is used for other things!

Seeing digits as macro characters! Yes, it's a very good idea, and it's
standard CL. Cool!

Thanks for the hack ;)

I just have to be sure that all cases are handled. For example, if the
input is '3gg', I have to check what happens ...

-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: William Bland
Subject: Re: demonic numbers !
Date: 
Message-ID: <pan.2005.02.07.01.09.25.983698@abstractnonsense.com>
On Sun, 06 Feb 2005 13:12:31 +0100, Christophe Turle wrote:
> 
> Seeing digits as macro characters! Yes, it's a very good idea, and it's
> standard CL. Cool!
> 
> Thanks for the hack ;)
> 

Glad you liked it.  Here's another one I just wrote that seems relevant to
this thread (scroll to the end for an example of its use):

(defvar *digits* '(#\0 #\1 #\2 #\3 #\4 #\5 #\6 #\7 #\8 #\9 #\.))

(defclass number-with-error-bar ()
  ((number
    :initarg :number
    :initform (error "Must supply a number."))
   (error-bar
    :initarg :error-bar
    :initform 0)))

(defmethod print-object ((x number-with-error-bar) stream)
  (format stream "~A+/-~A" (slot-value x 'number) (slot-value x 'error-bar)))

(defun plus (&rest numbers)
  (let ((result 0)
	(result-error 0))
    (dolist (n numbers)
      (typecase n
	(number
	 (incf result n))
	(number-with-error-bar
	 (incf result (slot-value n 'number))
	 (incf result-error (slot-value n 'error-bar)))
	(t
	 (error "Cannot add ~S" n))))
    (if (= result-error 0)
	result
	(make-instance 'number-with-error-bar
		       :number result :error-bar result-error))))

(defun read-number-with-error-bar (stream char)
  "read all numbers as numbers with error bars"
  (let ((this-char nil)
	(result (position char *digits*))
	(n 0)
	(position-of-dot nil))
    (loop while (member (peek-char nil stream nil nil) *digits*) do
	 (setf this-char (read-char stream))
	 (if (eql this-char #\.)
	     (setf position-of-dot n)
	     (setf result (+ (* 10 result) (position this-char *digits*))))
	 (incf n))
    (if position-of-dot
	(make-instance 'number-with-error-bar
		       :number (/ result (expt 10 (- n position-of-dot 1)) 1.0)
		       :error-bar (/ 0.5 (expt 10 (- n position-of-dot 1))))
	result)))

(mapcar (lambda (c)
	  (set-macro-character c #'read-number-with-error-bar))
	(remove #\. *digits*))

(set-macro-character #\+
		     (lambda (stream char)
		       (declare (ignore char))
		       (let ((result (make-array '(1)
						 :element-type 'base-char
						 :fill-pointer t :adjustable t
						 :initial-contents '(#\+))))
			      (loop until (member (peek-char nil stream nil #\Space) ; EOF acts as a delimiter
					     '(#\( #\) #\Space
					       #\Newline #\Tab)) do
			      (vector-push-extend (read-char stream) result))
			 (if (equal result "+")
			     'plus
			     (intern result)))))

;; end of file

CL-USER> 1
1
CL-USER> 1.0
1.0+/-0.05
CL-USER> 1.00
1.0+/-0.005
CL-USER> (+ 1 2.0 3.00 4.000)
10.0+/-0.0555
CL-USER> (reduce #'+ (make-list 100 :initial-element 1.0))
100.0+/-5.000001
CL-USER> '+
PLUS
CL-USER> '++
++
CL-USER> '+a-symbol+
|+a-symbol+|


Some people say it's a good thing for a language to be "objects all the
way down".  Lisp is much more than that - it's programmable all the way
down.

Cheers,
	Bill.
From: Pascal Bourguignon
Subject: Re: demonic numbers !
Date: 
Message-ID: <871xbrat4l.fsf@thalassa.informatimago.com>
William Bland <·······@abstractnonsense.com> writes:
> CL-USER> (reduce #'+ (make-list 100 :initial-element 1.0))
> 100.0+/-5.000001

Oops, it seems you have an error on the error here...


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
Small brave carnivores
Kill pine cones and mosquitoes
Fear vacuum cleaner
From: William Bland
Subject: Re: demonic numbers !
Date: 
Message-ID: <pan.2005.02.11.00.34.02.676862@abstractnonsense.com>
On Tue, 08 Feb 2005 17:55:38 +0100, Pascal Bourguignon wrote:

> 
> William Bland <·······@abstractnonsense.com> writes:
>> CL-USER> (reduce #'+ (make-list 100 :initial-element 1.0))
>> 100.0+/-5.000001
> 
> Oops, it seems you have an error on the error here...

Heh, yeah, maybe I should include an estimate for the error in the error
term too ;-)

Incidentally, I'm not sure if this came across in any of my earlier posts:
I can't really imagine using this number-with-error-bar thing myself
(or the trick to force reading of floats as rationals), but I *love* the
fact that Lisp allows me to reach into the language and redefine something
so basic.  Even if I can't imagine wanting to do it *now*, it's nice to
know I could do it if I ever needed to.

Cheers,
	Bill.
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <876510l8f5.fsf@nyct.net>
William Bland <·······@abstractnonsense.com> writes:

> CL-USER> '+
> PLUS

I think you broke the following in the process:

CL-USER> +

(define-symbol-macro plus +) might work to rectify the situation.
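
(Assuming the symbol macro is defined *before* #\+ is hijacked -- otherwise
the + in the defining form itself already reads as PLUS -- the intended
effect, sketched, is:

CL-USER> (list 1 2)
(1 2)
CL-USER> +
(LIST 1 2)

i.e. typing + still gets you the most recently evaluated form.)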

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: William Bland
Subject: Re: demonic numbers !
Date: 
Message-ID: <pan.2005.02.10.23.34.16.786642@abstractnonsense.com>
On Thu, 10 Feb 2005 10:51:58 -0500, Rahul Jain wrote:

> William Bland <·······@abstractnonsense.com> writes:
> 
>> CL-USER> '+
>> PLUS
> 
> I think you broke the following in the process:
> 
> CL-USER> +
> 
> (define-symbol-macro plus +) might work to rectify the situation.

Good catch - your fix seems to work just fine, thanks.

Cheers,
	Bill.
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <42052aa8$0$14158$626a14ce@news.free.fr>
"Pascal Bourguignon" <····@mouse-potato.com> a �crit dans le message de 
news: ··············@thalassa.informatimago.com...
> "Christophe Turle" <······@nospam.com> writes:
>> > Merely by introducing a reader macro that will do the rational
>> > reading. For example, you could use:
>> >
>> >    #/123.4567 --> 1234567/10000
>>
>> I can't change the inputs. I have the input string "(x 123.2569 y
>> 1256.3588)", for example. So I can't introduce reader macros :(
>
> The Lisp reader is good for reading Lisp data.  If you want to read
> application-specific data, then you have to implement your own reader.

I see your point. Except that there's no way to write an exact number with
a decimal point in Lisp :(

0.111111 is not interpreted by Lisp readers (nor by the readers of other
programming languages) the way it is by human readers :(

Is there a Lisp implementation which allows reading 0.11111 as it should be?

IMHO, 0.11111f should be interpreted as a float, not 0.11111.

But I don't want to re-open a recent flame thread about this point, even if
... ;)


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87mzui6uos.fsf@nyct.net>
"Christophe Turle" <······@nospam.com> writes:

> Is there a Lisp implementation which allows reading 0.11111 as it should be?

All do. Look at the paper on the META parser and just don't convert to
float in the stage where you compute the value. You'll end up with
11111/100000 as you "should".

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <4206079d$0$949$626a14ce@news.free.fr>
"Rahul Jain" <·····@nyct.net> a �crit dans le message de news: 
··············@nyct.net...
> "Christophe Turle" <······@nospam.com> writes:
>
>> Is there a Lisp implementation which allows reading 0.11111 as it should be?
>
> All do. Look at the paper on the META parser and just don't convert to
> float in the stage where you compute the value. You'll end up with
> 11111/100000 as you "should".

META: http://home.pipeline.com/~hbaker1/Prag-Parse.html

What I see is a parser generator which can produce the 0.11111 ->
11111/100000 conversion. OK.

So if I understand you well, the idea is to read the input string with the
generated parser. This would be OK if I thought of my input as
application-specific.

But what I think is that the Lisp reader is flawed. For me, it must
interpret "0.11111" as a rational and not as a float, in all cases! So my
request is how to hack the Lisp reader (even in a specific Lisp
implementation) at the 'token' level (I think), so that the token "0.11111"
is interpreted as I want it to be.


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Peter Seibel
Subject: Re: demonic numbers !
Date: 
Message-ID: <m3oeexennu.fsf@javamonkey.com>
"Christophe Turle" <······@nospam.com> writes:

> So if I understand you well, the idea is to read the input string with
> the generated parser. This would be OK if I thought of my input as
> application-specific.

Where is this input coming from?

> But what I think is that the Lisp reader is flawed. For me, it must
> interpret "0.11111" as a rational and not as a float, in all cases!

This doesn't make any sense to me. How is it that you have big piles
of Lisp code written in a dialect that is just like Common Lisp except
it expects numbers with a decimal point to be interpreted with a
different syntax?

If you're not talking about Lisp code, then you don't need to change
the Lisp reader, you just need to write a reader that implements the
syntax you want to use when reading whatever these files are that you
have. You can reuse quite a bit of the existing reader, or at least
the functions on top of which it is built (READ-CHAR, INTERN, etc.)

> So my request is how to hack the Lisp reader (even in a specific
> Lisp implementation) at the 'token' level (I think), so that the token
> "0.11111" is interpreted as I want it to be.

That's not possible--the syntax of numbers and symbols is hard-wired
into the reader and is not customizable.

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Peter Seibel
Subject: Re: demonic numbers !
Date: 
Message-ID: <m3k6plen37.fsf@javamonkey.com>
Peter Seibel <·····@javamonkey.com> writes:

> That's not possible--the syntax of numbers and symbols is hard-wired
> into the reader and is not customizable.

Okay, I spoke too fast--as others have pointed out you can define read
macros on all the digits and the decimal point in order to hijack
this.

-Peter

P.S. I recall some time back a fellow who wanted to read a bunch of
Lisp source that contained package-qualified symbol names without
necessarily defining the packages (he wanted to do some level of
analysis on the source code). Would this same trick work--define a
read macro on all the constituent characters to hijack the
construction of tokens so as to read over colons and handle the
interning of names yourself? I guess it would, though without the
package definitions you could lose track of certain
identities--FOO:BAR may be the same as BAR in some contexts, etc.

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <42064602$0$945$626a14ce@news.free.fr>
"Peter Seibel" <·····@javamonkey.com> a �crit dans le message de news: 
··············@javamonkey.com...
> "Christophe Turle" <······@nospam.com> writes:
>
> Where is this input coming from?
>
> This doesn't make any sense to me. How is it that you have big piles
> of Lisp code written in a dialect that is just like Common Lisp except
> it expects numbers with a decimal point to be interpreted with a
> different syntax?

I do a copy/paste from tables in PDF files. To be more precise, these
tables map channel numbers to frequencies; they are used in a TV monitoring
app. After that, I enclose the paste between "(" and ")".

The only thing left to do is to read this and interpret it into the right
structures. The problem is that Lisp's number interpretation is not
compatible with the human one. And I think that this is a bad design choice.


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87acqcl8n3.fsf@nyct.net>
"Christophe Turle" <······@nospam.com> writes:

> The only thing left to do is to read this and interpret it into the right
> structures. The problem is that Lisp's number interpretation is not
> compatible with the human one. And I think that this is a bad design choice.

You clearly haven't taken any quantitative science courses then, such as
chemistry. In such disciplines, numbers as you show them are interpreted
EVEN MORE imprecisely than what the Lisp reader does.

0.11111 is interpreted as any number between 0.111105 and 0.111115. Is
that what we should respecify the Lisp reader to do? Because I'm sure a
chemist who thought like you would claim that.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Cameron MacKinnon
Subject: Re: demonic numbers !
Date: 
Message-ID: <N9WdnSvBjI_CVpbfRVn-1g@golden.net>
Rahul Jain wrote:
> 0.11111 is interpreted [in quantitative science] as any number
> between 0.111105 and 0.111115. Is that what we should respecify the Lisp
> reader to do? Because I'm sure a chemist who thought like you would
> claim that.

There's a good argument to be made for that. The point is, most people
would want the computer to operate on either your interpretation above
with error bounds carried through the calculation, or exactly the number
the user entered. Few people expect the computer to carry out
approximate calculations with very strange rounding rules and no
corresponding calculation of error intervals. Even those schooled in the
art, who claim that that is exactly what they are expecting,
occasionally get bitten, and badly.

> Are you saying that non-specialists and non-compilers read code on a 
> regular basis?

I say that non-specialists WRITE code on a regular basis. You can define
non-specialist as either someone with no formal computer science
education or someone with a CS degree but no numerical analysis
training, it doesn't matter to me. If a person is coding floating point
but is incapable of performing the error analysis, that person is a
non-specialist.

> Computers do math. They are _computing_ machines, particularly when
> it comes to numbers.

Not really. It is possible to mathematically express what the computer
does when it evaluates (+ float1 float2), but it's PAGES of math, and if
you showed it to a mathematician, he'd declare it not aesthetically
pleasing. When it comes to FP, it would be more accurate to say that
math can model what computers do.

> Most importantly, they are mostly used for engineering or financial
> math. Small rounding differences are insignificant compared to speed
> in those applications. If my calculation says to use 153.222m instead
> of 153.221m of wire on a bridge, is that a big deal? If my 
> calculation tells me to buy some stock at 15.53 instead of 15.531 is 
> that a big deal? The important thing, especially for the financial 
> situation, is to get the result _fast_ so that the people doing the
> work can get on with it and make money.

> Most financial data is the result of approximate solving of large
> matrix equations or of partial differential equations. Time is also
> rounded, sometimes to the nanosecond, sometimes to the millisecond,
> sometimes to the second, sometimes to the minute, sometimes to the
> day, sometimes to the year... and probably off from UTC by as much as
> a second or two if it claims to be rounded to the millisecond...
> sometimes worse.

Finance first, since I happen to be wearing an Algorithmics shirt today.
Since antiquity, the vast majority of practitioners of financial math
have preferred exact accuracy over speed, and the few who didn't were
often subjected to punitive measures. Even today, the majority wouldn't
know e the natural logarithm base from E the musical note. Yes, quants
in the money centers, amateur quants at home and risk managers
everywhere are using some pretty fancy math, but it was all invented
after computers were, and computers were the enabler. Those people,
those computers and that math represent a small percentage of computers'
use in finance (though undoubtedly a large percentage of the actual
numerical operations performed, on about fifty computers worldwide). The
majority of financial data is still boring old tuples like
date,quantity,price
date,account,amount
date,open,high,low,close,volume
Duane Rettig has pointed out that he has customers in finance who don't
like float rounding.

Time: In the business world, time rounding is done according to law or
custom. Business users do not think of the values in their date fields
as mere approximations of the exact time. They are, for legal purposes,
the time the transaction was done. Any simple arithmetic on dates which
doesn't take law and custom into account is likely to be wrong. Science
is different, obviously, so the question comes down to what percentage
of time field data is business and what percentage scientific. But the
whole time thing is another red herring (like yesterday's number
theoretic excursion) unless people are entering times as decimal numbers
rather than formatted strings.

For any engineering result you give me with what you believe to be an
acceptable error (say 1mm) I can give a plausible next use of that
number which will multiply the error into the unacceptable range. "Small
rounding differences are insignificant compared to speed?" I don't think
so. In engineering, if you're worried about speed, it must be because
you're doing millions or billions of calculations, which usually means
finite element modeling and rigorous error analysis, meaning calling in
the professionals or using somebody else's numerics package, or both.
Smaller problems finish more or less instantaneously, meaning our number
formats could get quite a bit less efficient before the users noticed.
And error analysis is taken quite a bit more seriously in engineering
than you seem to believe.
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87acqcf68k.fsf@nyct.net>
Cameron MacKinnon <··········@clearspot.net> writes:

> Rahul Jain wrote:
>> 0.11111 is interpreted [in quantitative science] as any number
>> between 0.111105 and 0.111115. Is that what we should respecify the Lisp
>> reader to do? Because I'm sure a chemist who thought like you would
>> claim that.
>
> There's a good argument to be made for that. The point is, most people
> would want the computer to operate on either your interpretation above
> with error bounds carried through the calculation, or exactly the number
> the user entered. Few people expect the computer to carry out
> approximate calculations with very strange rounding rules and no
> corresponding calculation of error intervals.

Which few? The ones writing software or the ones not?

> Even those schooled in the art, who claim that that is exactly what
> they are expecting, occasionally get bitten, and badly.

Sure, but at least it makes their project feasible. At least they got
1500 rockets off even if it means that 1 went off on a bad trajectory. 
Otherwise, they'd have never gotten even 1 off.

>> Are you saying that non-specialists and non-compilers read code on a
>> regular basis?
>
> I say that non-specialists WRITE code on a regular basis. You can define
> non-specialist as either someone with no formal computer science
> education or someone with a CS degree but no numerical analysis
> training, it doesn't matter to me. If a person is coding floating point
> but is incapable of performing the error analysis, that person is a
> non-specialist.

Then they're either not doing enough math for rounding errors to
accumulate or they're not using a wide enough range of magnitudes. FP
will be fine for them as long as they print out the number with the
precision they expect it to have.

> The  majority of financial data is still boring old tuples like
> date,quantity,price
> date,account,amount
> date,open,high,low,close,volume
> Duane Rettig has pointed out that he has customers in finance who don't
> like float rounding.

They don't need more than 53 bits of precision to store those values. 
Any computations done with them won't span many orders of magnitude,
either. The entire trading on NASDAQ isn't even enough to cause more
than a few pennies of error if you add up each share's trading price
separately to compute total dollar volume in a day. In the bond market,
the numbers are bigger, but the values are in the 1000s, so you have the
same relative magnitudes. And then they round those numbers to the
nearest $1000 in the end anyway.

> Time: In the business world, time rounding is done according to law or
> custom. Business users do not think of the values in their date fields
> as mere approximations of the exact time. They are, for legal purposes,
> the time the transaction was done. Any simple arithmetic on dates which
> doesn't take law and custom into account is likely to be wrong.

Yes, exactly. That was my point. It doesn't make sense to use FP for
times because you're not adding numbers here. You're moving around in a
multi-dimensional space with intricate topology. Was 5/15/1986 a valid
settlement date for an equities trade? In Japan or the US?

> For any engineering result you give me with what you believe to be an
> acceptable error (say 1mm) I can give a plausible next use of that
> number which will multiply the error into the unacceptable range. "Small
> rounding differences are insignificant compared to speed?" I don't think
> so. In engineering, if you're worried about speed, it must be because
> you're doing millions or billions of calculations, which usually means
> finite element modeling and rigorous error analysis, meaning calling in
> the professionals or using somebody else's numerics package, or both.

And go figure, it's those professionals (either hired for the task or to
write the library) are the ones writing the most numerically- and
performance-sensitive code.

> Smaller problems finish more or less instantaneously, meaning our number
> formats could get quite a bit less efficient before the users noticed.

And smaller problems use fewer numbers, meaning FP roundoffs and
magnitude differences don't have time to accumulate.

> And error analysis is taken quite a bit more seriously in engineering
> than you seem to believe.

You mean they care if they're off by 100 baryons when they compute the
number of baryons in the universe? (Sorry to be so pedantically precise,
but "particle" just doesn't cut it here.)

FP is more precise than their measurements, I guarantee it.

The only issue I've seen other than foolishly using programmer-oriented
output routines for user interfaces is non-portability in results that
is actually insignificant (10ths of a penny, even on the largest trades
in history... and at that point they don't exactly care about pennies,
let alone dollars... they'd gladly buy the trader a $30 lunch to get the
deal through).

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Kent M Pitman
Subject: Re: demonic problem descriptions [was Re: demonic numbers !]
Date: 
Message-ID: <umzui3zii.fsf_-_@nhplace.com>
"Christophe Turle" <······@nospam.com> writes:

> I can't change the inputs. I have the input string "(x 123.2569 y
> 1256.3588)", for example. So I can't introduce reader macros :(

No, that's not so.  That is, the assertion in your first sentence is not,
as you seem to suggest, the result of some logical implication from the
premise in the second sentence to an unavoidable conclusion in the third
sentence.

For example, you could make #\0, #\1, #\2, etc. be readmacros that
could then read the rest of the number in whatever format they liked.

(Don't do it globally, of course, make your own readtable.  It won't be
 program-friendly to do this, and Rahul Jain's observation is right
 when he says "You must not use the lisp reader, then, because you are not
 reading Lisp", to the extent that you are not using readmacros.  You
 can use readmacros, of course, in any Lisp code that is cooperating with
 the syntax that the readmacros implement.)

I recall once making a readtable for FORTRAN in which every character in
the character set was a readmacro that just triggered the FORTRAN parser.
That's just an extreme trick, of course, but the point is that you have
a lot more flexibility than you think.

> Actually, I'm binding *read-default-float-format*:
> 
> (let ((*read-default-float-format* 'double-float))
>    (read-from-string "(x 123.2569 y 1256.3588)" ))

Not that this matters, since you're wanting rationals.  You're now dealing
in approximations.  Though you may be able to help yourself a little by
looking at the function RATIONALIZE.

But look here, the real issue is something you're evading or that you
addressed before I started reading this thread:

I'll begin with a bold claim:  A priori, no notation means anything.

Things ONLY mean something in the context of a person (or program)
creating text in a notation and a person (or program) interpreting that
text according to a mutually agreed set of rules.

This may seem obvious but I've got to wonder:  Who is making up this text
that you cannot change?

People don't normally parenthesize things like that.  They usually use
commas between items in lists, and usually do not put parens around lists
for that matter.  So it sounds like someone with just a little too much
bad or partial knowledge of Lisp (possibly to include a well-meaning but
confused instructor or textbook), and that your initial constraint is the
real bug.  It sounds like a fake constraint, and/or a job for a sed script.
Real programming does not have constraints like "you may not modify the
input data".  Input data is there FOR THE PURPOSE OF being modified, and
in real programs, it is often cascaded through multiple filters before 
getting to a form in which it can be processed.

I once worked on a project for some people doing Gamma ray spectra analysis,
before the days of scanners and OCR technology, and someone wanted to turn
a science book full of data into something we could use in our programs.
Knowing there was Lisp involved, they told some secretary to "type in the
book" and "to use lots of parens because Lisp likes that".  So tables got
typed in with parens here and there, but this was NOT lisp, and not really
even helpful.  There were no conventions for what to do about missing columns,
funny chars like plus-or-minus, etc.  The typist just "got creative" and
often not consistently.  Your problem sounds like a similar problem to what
I had, only "in the small".  But I can tell you the ONLY thing that allowed me
to get the book parsed was that I was allowed to use all kinds of strategies
for transformation.  Any kind of statement like "and you can't use readmacros"
would have been unreasonably restrictive and killed the project.  You might as
well say "can't use right parens" or "can't use xor" or "can't use a jump
instruction".  To what end?

There is a right time and a wrong time to solve a problem as explained.

It sounds to me like your BEST options are:

 (a) to question whether the upstream program shouldn't produce
     123/1000 instead of .123 if that's what it means

 (b) to preprocess the string to change the format from .123 to 123/1000
     before applying the lisp reader

and you should be considering solutions like readmacros and/or use of 
double-floats + rationalize only if someone has tied your hands and made
it clear that they intend to make your life thus miserable.
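
For the record, option (b) needs only a few lines when the input is as tame
as your sample. A throwaway sketch (it handles unsigned decimals and
nothing else):

(defun decimals->ratios (string)
  "Rewrite each 123.456-style token as 123456/1000 before READ sees it.
A sketch only: it handles unsigned decimals and nothing else."
  (with-output-to-string (out)
    (let ((n (length string)) (i 0))
      (loop while (< i n) do
        (let ((start i))
          ;; grab a maximal run of digits and dots
          (loop while (and (< i n)
                           (let ((c (char string i)))
                             (or (digit-char-p c) (char= c #\.))))
                do (incf i))
          (let* ((token (subseq string start i))
                 (dot (position #\. token)))
            (cond ((zerop (length token))         ; not a number: copy one char
                   (write-char (char string i) out)
                   (incf i))
                  ((and dot (> (length token) 1)) ; a decimal: make it a ratio
                   (format out "~a/~a" (remove #\. token)
                           (expt 10 (- (length token) dot 1))))
                  (t (write-string token out)))))))))

;; (decimals->ratios "(x 123.2569 y 1256.3588)")
;; => "(x 1232569/10000 y 12563588/10000)"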
From: Rahul Jain
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87sm496fd8.fsf@nyct.net>
Kent M Pitman <······@nhplace.com> writes:

> For example, you could make #\0, #\1, #\2, etc. be readmacros that
> could then read the rest of the number in whatever format they liked.
[...]
> I recall once making a readtable for FORTRAN in which every character in
> the character set was a readmacro that just triggered the FORTRAN parser.

Yikes, Kent! You really scare me sometimes. At least what's needed to
solve this problem doesn't require a huge amount of readtable hacking. 
All the digits and the decimal point can be mapped to a readmacro that
simply invokes the parser he needs.

Ingenious, actually!

As a side note, I think he'll want them to be NON-terminating macro
characters, but I don't know the details of the syntax he's reading.
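
(Concretely, that just means passing SET-MACRO-CHARACTER's optional third
argument, e.g.

    (set-macro-character #\1 #'read-rational t)  ; t = non-terminating

so that a digit inside a token like MY-SYMBOL-2 no longer splits the
symbol.)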

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Christophe Turle
Subject: Re: demonic problem descriptions [was Re: demonic numbers !]
Date: 
Message-ID: <42063656$0$961$626a14ce@news.free.fr>
"Kent M Pitman" <······@nhplace.com> a �crit dans le message de news: 
················@nhplace.com...
> "Christophe Turle" <······@nospam.com> writes:
>
>> I can't change the inputs. I have the input string "(x 123.2569 y
>> 1256.3588)", for example. So I can't introduce reader macros :(
>
> No, that's not so.
> For example, you could make #\0, #\1, #\2, etc. be readmacros that
> could then read the rest of the number in whatever format they liked.

Yes, you are right (I was thinking of Pascal's #/ reader macro). As shown
by William Bland, using the digits as reader macros is a good idea.


> Things ONLY mean something in the context of a person (or program)
> creating text in a notation and a person (or program) interpreting that
> text according to a mutually agreed set of rules.


I agree. And the context for Lisp should be compatible with the human
context. The default float interpretation is not compatible with human
reading. That's my point.


> This may seem obvious but I've got to wonder:  Who is making up this text
> that you cannot change?

I have added the parentheses around a string pasted from a PDF file.

> There is a right time and a wrong time to solve a problem as explained.
>
> It sounds to me like your BEST options are:
>
> (a) to question whether the upstream program shouldn't produce
>     123/1000 instead of .123 if that's what it means

No, it is a copy/paste from a PDF file.

>
> (b) to preprocess the string to change the format from .123 to 123/1000
>     before applying the lisp reader

This means parsing the input twice. It is acceptable if the input format
is application-specific. But as I said before, 0.1111 shouldn't be read as
a float in Lisp. It is a bad design choice.

Did you write 0.123 or 123/1000 in your papers? Except when speaking about
Lisp ;)



-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr)
From: Christophe Rhodes
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <sqpszdis5r.fsf@cam.ac.uk>
"Christophe Turle" <······@nospam.com> writes:

> I agree. And the context for Lisp should be compatible with the human
> context. The default float interpretation is not compatible with human
> reading. That's my point.

If that's your point, it's not a very good one.  I am a human (I know,
I know, I could be a dog); I read Lisp quite a lot, and I can
interpret the notation of tokens as floating point numbers.

I actually don't believe that the solution to the "Naïve Reader"
problem is to force everyone to descend to the level of Most Common
Naïveté; instead, I'd suggest that the naïve reader use this to
educate themselves.

Christophe
From: Cameron MacKinnon
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <DoydnXTOU44wz5vfRVn-ug@golden.net>
Christophe Rhodes wrote:
> "Christophe Turle" <······@nospam.com> writes:
> 
> 
>> I agree. And the context for Lisp should be compatible with the human
>> context. The default float interpretation is not compatible with human
>> reading. That's my point.
> 
> 
> If that's your point, it's not a very good one.  I am a human (I
> know, I know, I could be a dog); I read Lisp quite a lot, and I can 
> interpret the notation of tokens as floating point numbers.

I doubt that you can look at any given number and compute in your head
the error that will be introduced if converted to a binary single or
double float. Or by "interpret" do you just mean that you know the
number will be perturbed as it is read into memory, and you're OK with that?

> I actually don't believe that the solution to the "Naïve Reader" 
> problem is to force everyone to descend to the level of Most Common 
> Naïveté; instead, I'd suggest that the naïve reader use this to 
> educate themselves.

Your view of software engineering, to wit giving the user sharp tools
that don't work the way a numerate amateur would expect, then waiting
for him to discover that they're dangerous and broken, has led and will
continue to lead to disasters large and small, along with miscellaneous
annoyingly unreliable systems. Other threads here have recently shown
that essentially nobody, even those attending the world's best computer
science schools, gets mandatory instruction in the mysteries of the
binary floats that programmers use every day. What event led to your
education in these matters?

> This is just silly, now.  In my daily work, if I write 0.1111, I
> imply something different from if I write 1111/10000, and the
> community with which I communicate understands this distinction.  (It
> isn't quite the same distinction as the distinction that the default
> Lisp reader draws, but it is pretty close.)  You seem to be implying
> that your personal vocabulary is somehow the unique "human
> convention" -- and, I'm sorry to have to tell you, it isn't.

What do you and the other humans you interact with interpret 0.1111 as
meaning? Is there an implied +/-0.00005? An implied
+/-0.00000000000000005? It seems to me that you're already admitting
that your computer doesn't interpret your 0.1111 in the same way as the
people in your field do. Do your programs take steps to interpret 0.1111
differently from the way Common Lisp normally would? Or do they just do
the wrong thing (according to your community's conventions) and produce
questionable output whose error is not quantified?
From: Christophe Rhodes
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <sqekftiny1.fsf@cam.ac.uk>
Cameron MacKinnon <··········@clearspot.net> writes:

> Christophe Rhodes wrote:
>> "Christophe Turle" <······@nospam.com> writes:
>> If that's your point, it's not a very good one.  I am a human (I
>> know, I know, I could be a dog); I read Lisp quite a lot, and I can
>> interpret the notation of tokens as floating point numbers.
>
> I doubt that you can look at any given number and compute in your head
> the error that will be introduced if converted to a binary single or
> double float. 

Doubt, by all means.  I don't think it's that hard, but I'll happily
concede that it's not what I typically do.

> Or by "interpret" do you just mean that you know the number will be
> perturbed as it is read into memory, and you're OK with that?

If there is an exact number which, for some reason or other, is going
to be used in a computer calculation as a floating point number, then
the act of reading it in will introduce a source of error (to be
accounted for along with all the other sources of error in the
experiment).  It's not the disaster that people in this thread seem to
be thinking it is, as long as the error is quantified and verified to
be small in comparison with other sources of error, which it often --
though of course not always -- is.

>> I actually don't believe that the solution to the "Naïve Reader"
>> problem is to force everyone to descend to the level of Most Common
>> Naïveté; instead, I'd suggest that the naïve reader use this to
>> educate themselves.
>
> Your view of software engineering, to wit giving the user sharp tools
> that don't work the way a numerate amateur would expect, then waiting
> for him to discover that they're dangerous and broken, has led and will
> continue to lead to disasters large and small, along with miscellaneous
> annoyingly unreliable systems.

I'm quite happy for other people to be kept away from the knives,
though I'd continue to encourage them to learn how to use knives
safely, because if they do they will be able to produce nice sushi.

> Other threads here have recently shown that essentially nobody, even
> those attending the world's best computer science schools, gets
> mandatory instruction in the the mysteries of the binary floats that
> programmers use every day. What event led to your education in these
> matters?

No "event" led to my education in these matters, unless you count my
being born.  Maybe I had the good fortune to be in an environment
where knowledge was prized rather than dismissed or viewed simply as a
means to a paycheque.  (I'm not a computer scientist by training;
while I was studying Physics, those who were learning Computer Science
learnt about floating point representations and Numerical Analysis as
part of their degree, and this hasn't changed.)

>> This is just silly, now.  In my daily work, if I write 0.1111, I
>> imply something different from if I write 1111/10000, and the
>> community with which I communicate understands this distinction.  (It
>> isn't quite the same distinction as the distinction that the default
>> Lisp reader draws, but it is pretty close.)  You seem to be implying
>> that your personal vocabulary is somehow the unique "human
>> convention" -- and, I'm sorry to have to tell you, it isn't.
>
> What do you and the other humans you interact with interpret 0.1111 as
> meaning? Is there an implied +/-0.00005? An implied
> +/-0.00000000000000005? 

The first is closer, but then again I suspect we would be using +/-
with different meanings.

0.1111 as is would be something like "this number has been observed as
being about 1111/10000, and we don't have any justification for any
more significance".

0.1111 +/- 0.00005 means something subtly different: "a quantity has
been measured to be 1111/10000, and experiments suggest that the
true value of the quantity has about a 60% chance of lying within
5/100000 of the measured value".

And "1111/10000" means "exactly 1111/10000, from some theoretical
calculation".

> It seems to me that you're already admitting that your computer
> doesn't interpret your 0.1111 in the same way as the people in your
> field do. Do your programs take steps to interpret 0.1111
> differently from the way Common Lisp normally would? Or do they just
> do the wrong thing (according to your community's conventions) and
> produce questionable output whose error is not quantified?

If I'm reporting a scientific result, of course I take the care to
compute an error estimate.  Not to do such would not only be
irresponsible, it would result in my work not being published, or if
published it would either be ignored or be rebutted.

Christophe
From: Cameron MacKinnon
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <AsydndOmld2A6pvfRVn-uw@golden.net>
Christophe Rhodes wrote:
> If there is an exact number which, for some reason or other, is going
>  to be used in a computer calculation as a floating point number, 
> then the act of reading it in will introduce a source of error (to be
>  accounted for along with all the other sources of error in the 
> experiment).  It's not the disaster that people in this thread seem 
> to be thinking it is, as long as the error is quantified and verified
>  to be small in comparison with other sources of error, which it 
> often -- though of course not always -- is.

It's not even USUALLY a disaster in common practice (which is to say,
when no error analysis is done). The most insidious thing about floating
point is that it can be abused horribly and still give good results
99.99% of the time. And so, of course, most everyone abuses it horribly.
Occasionally there's a disaster. One can google for "floating
point"+disaster. Patriot missile malfunction kills 28, anyone?

> No "event" led to my education in these matters, unless you count my 
> being born.  Maybe I had the good fortune to be in an environment 
> where knowledge was prized rather than dismissed or viewed simply as
>  a means to a paycheque.  (I'm not a computer scientist by training;
>  while I was studying Physics, those who were learning Computer 
> Science learnt about floating point representations and Numerical 
> Analysis as part of their degree, and this hasn't changed.)

Your alma mater's CS department seems exceptional in that regard. When I
learned about FP it was done in software (on all the equipment available
to me) and implementing one's own fixed point representation to meet
given speed and accuracy targets wasn't unusual. This naturally led to
more awareness about the costs and benefits of floating point.

>> It seems to me that you're already admitting that your computer 
>> doesn't interpret your 0.1111 in the same way as the people in your
>>  field do. Do your programs take steps to interpret 0.1111 
>> differently from the way Common Lisp normally would? Or do they 
>> just do the wrong thing (according to your community's conventions)
>>  and produce questionable output whose error is not quantified?
> 
> 
> If I'm reporting a scientific result, of course I take the care to 
> compute an error estimate.  Not to do such would not only be 
> irresponsible, it would result in my work not being published, or if 
> published it would either be ignored or be rebutted.

Well, you have a better understanding of the representation than the
vast majority of computer users. I hope that you're never burned by a
calculation you make whose error estimate (which you won't compute,
because the work isn't for publication, or is only a "back of the
envelope" calculation) ends up far higher than thought, spoiling your
answers undetected. I wish more coders (or their tools) were as
conscientious as you are.
From: Christophe Rhodes
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <sq7jllifjq.fsf@cam.ac.uk>
Cameron MacKinnon <··········@clearspot.net> writes:

> Christophe Rhodes wrote:
>>> It seems to me that you're already admitting that your computer
>>> doesn't interpret your 0.1111 in the same way as the people in your
>>>  field do. Do your programs take steps to interpret 0.1111
>>> differently from the way Common Lisp normally would? Or do they
>>> just do the wrong thing (according to your community's conventions)
>>>  and produce questionable output whose error is not quantified?
>> If I'm reporting a scientific result, of course I take the care to
>> compute an error estimate.  Not to do such would not only be
>> irresponsible, it would result in my work not being published, or if
>> published it would either be ignored or be rebutted.
>
> Well, you have a better understanding of the representation than the
> vast majority of computer users. I hope that you're never burned by a
> calculation you make whose error estimate (which you won't compute,
> because the work isn't for publication, or is only a "back of the
> envelope" calculation) ends up far higher than thought, spoiling your
> answers undetected. 

Lest anyone get the wrong impression, I'm sure I've made a few
mistakes... but maybe too few to mention.

> I wish more coders (or their tools) were as conscientious as you
> are.

So do I.  So do I.

Christophe
From: Rahul Jain
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <877jlgi8mm.fsf@nyct.net>
Cameron MacKinnon <··········@clearspot.net> writes:

> One can google for "floating point"+disaster. Patriot missile
> malfunction kills 28, anyone?

Patriot missile deemed impractical because exact numerical calculations
are too slow to be done in real-time, allowing the killing of tens of
thousands, anyone?

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Coby Beck
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <dVuNd.14567$tU6.6609@edtnps91>
"Christophe Rhodes" <·····@cam.ac.uk> wrote in message
···················@cam.ac.uk...
> Cameron MacKinnon <··········@clearspot.net> writes:
>> What do you and the other humans you interact with interpret 0.1111 as
>> meaning? Is there an implied +/-0.00005? An implied
>> +/-0.00000000000000005?
>
> The first is closer, but then again I suspect we would be using +/-
> with different meanings.
>
> 0.1111 as is would be something like "this number has been observed as
> being about 1111/10000, and we don't have any justification for any
> more significance".
>
> 0.1111 +/- 0.00005 means something subtly different: "a quantity has
> been measured to be 1111/10000, and experiments suggest that the
> true value of the quantity has about a 60% chance of lying within
> 5/100000 of the measured value".
>
> And "1111/10000" means "exactly 1111/10000, from some theoretical
> calculation".

Well, since I pass the test of being human, I will offer yet another
interpretation.  When I saw .1111 the first time a few posts back I saw 1/9
among the other suggested interpretations.

Is this Wrong with a capital W?  No, it is not, because data is nothing
without context.

The basic fallacy of the "don't double-float my boat" camp is that a number, 
absent application context, is as meaningless as the ASCII codes of the 
characters in its representation.  There is no reason, absent that context, 
to believe that the best interpretation of .123 is 123/100.  Not only that, 
I completely reject the notion that a naive human who types at a terminal 
".123" most probably means 123/100.  I suggest that in ~99/100% of 
non-computer scientist cases, the naive user does not even know what they 
really mean and in nearly as many, would not care.  In computer professional 
cases it may well be quite high as well.  Even if not, it is very probable 
that in 90% of cases it will not matter.

As for the smaller number of times that it does matter, no amount of 
machinery can remove the obligation from the programmer to use the correct 
programming techniques nor from the user to understand the context of the 
numbers they read.

I am a bit mystified as to why the other side of this equation is being 
ignored, that is the writing part of reading and writing data.  If your 
program produces 123/100 and you write it to a file as .123, then this is 
where the bug is hatched.

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: David Steuber
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87sm495njw.fsf@david-steuber.com>
"Coby Beck" <·····@mercury.bc.ca> writes:

> I am a bit mystified as to why the other side of this equation is being 
> ignored, that is the writing part of reading and writing data.  If your 
> program produces 123/100 and you write it to a file as .123, then this is 
> where the bug is hatched.

Clearly this is a nasty bug.

-- 
An ideal world is left as an excercise to the reader.
   --- Paul Graham, On Lisp 8.1
From: Christophe Turle
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <4206865c$0$1007$626a14ce@news.free.fr>
"Coby Beck" <·····@mercury.bc.ca> a �crit dans le message de news: 
····················@edtnps91...
>
> Well, since I pass the test of being human I will offer yet another
> interpretation.  When I saw .1111 the first time a few posts back I saw 
> 1/9
> among the other suggested interpretations.
>
> Is this Wrong with a capital W?  No, it is not, because data is nothing
> without context.
> The basic fallacy of the "don't double-float my boat" camp is that a 
> number, absent application context, is as meaningless as the ASCII codes 
> of the characters in its representation.  There is no reason, absent that 
> context, to believe that the best interpretation of .123 is 123/100.

Yes and no. When you don't specify a context, the default one applies, so
there is a context.

Now, what is the default context when humans see ".12455"? I argue that it
means 12455/100000. I even argue that for most programmers it means the
same thing. And even for float-intensive users, this is what it means when
they see it on a price ticket ;)

>  Not only that, I completely reject the notion that a naive human who 
> types at a terminal ".123" most probably means 123/100.

You reject evidence: ask your wife if she interprets 0.1 the same as 1/10.
I bet the answer is yes.


>  I suggest that in ~99/100% of

Do you mean 99%??? I suspect a typo there, which is important for
understanding the rest.

> non-computer scientist cases, the naive user does not even know what they 
> really mean and in nearly as many, would not care.  In computer 
> professional cases it may well be quite high as well.  Even if not, it is 
> very probable that in 90% of cases it will not matter.
>
> As for the smaller number of times that it does matter, no amount of 
> machinery can remove the obligation from the programmer to use the correct 
> programming techniques nor from the user to understand the context of the 
> numbers they read.
>
> I am a bit mystified as to why the other side of this equation is being 
> ignored, that is the writing part of reading and writing data.  If your 
> program produces 123/100 and you write it to a file as .123, then this is 
> where the bug is hatched.

"0.123" is the more human common way to write 123/100. No bugs there.


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: David Steuber
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <1107745674.048483.60190@z14g2000cwz.googlegroups.com>
Christophe Turle wrote:
> "Coby Beck" <·····@mercury.bc.ca> a écrit dans le message de news:
> ····················@edtnps91...
> >
> > I am a bit mystified as to why the other side of this equation is being
> > ignored, that is the writing part of reading and writing data.  If your
> > program produces 123/100 and you write it to a file as .123, then this is
> > where the bug is hatched.
>
> "0.123" is the more human common way to write 123/100. No bugs there.

Are you absolutely sure about that?  Read it again :-)
From: Barry Margolin
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <barmar-2B3709.21153406022005@comcast.dca.giganews.com>
In article <························@news.free.fr>,
 "Christophe Turle" <······@nospam.com> wrote:

> "0.123" is the more human common way to write 123/100. No bugs there.

Except for the factor-of-10 error. :)

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Coby Beck
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <tOwNd.14614$tU6.5501@edtnps91>
"Christophe Turle" <······@nospam.com> wrote in message 
·····························@news.free.fr...
> "Coby Beck" <·····@mercury.bc.ca> a �crit dans le message de news: 
> ····················@edtnps91...
>> of the characters in its representation.  There is no reason, absent that 
>> context, to believe that the best interpretation of .123 is 123/100.
>
> Yes and no. When you don't specify a context, the default one applies, so 
> there is a context.

Sorry, there is no such thing as "default context" for any number.  There is 
only "context" which may or may not be known and it is always a dangerous 
thing to assume something that is not known.  This may well be a requirement 
of an application, but should not be a feature of a programming language or 
an implementation thereof.  So what is left, if we can not assume a context? 
Specify the behaviour.  This specification will necessarily be arbitrary to 
some degree, that can not be helped.

If you now approach the problem from this perspective, you may find your 
judgements as to the relative merits of history, industry norms, 
implementation details and user preferences (expectations?) somewhat 
altered.

> Now, what is the default context when humans see ".12455"? I argue that it 
> means 12455/100000. I even argue that for most programmers it means the same 
> thing. And even float-intensive users read it that way when they see it on a 
> price ticket ;)

So you assert, others seem to disagree.  You must admit it is a subjective 
assertion, absent some kind of very complex polling process at least.

>>  Not only that, I completely reject the notion that a naive human who 
>> types at a terminal ".123" most probably means 123/100.
>
> You reject evidence : ask your wife if she interprets 0.1 the same as 
> 1/10. I bet the answer is yes.

I'll pretend I have a wife and she said yes.  I say yes, too.  Now I ask her: 
is .33 the same as 1/3?  She again said yes.  I said, "well, that depends." 
Now I asked her whether she wants a computer programming language to interpret 
.1111 as 1111/10000 or as some implementation-dependent floating-point 
approximation of 1.111e-1.  I said 1.111e-1, and she told me to leave 
her alone.

But this is anecdotal, not useful evidence of anything.

>>  I suggest that in ~99/100% of
>
> Do you mean 99% ??? i suspect a typo there which is important for the rest 
> of the comprehension.

yes, sorry.

>> non-computer scientist cases, the naive user does not even know what they 
>> really mean and in nearly as many, would not care.  In computer 
>> professional cases it may well be quite high as well.  Even if not, it is 
>> very probable that in 90% of cases it will not matter.
>>
>> As for the smaller number of times that it does matter, no amount of 
>> machinery can remove the obligation from the programmer to use the 
>> correct programming techniques nor from the user to understand the 
>> context of the numbers they read.
>>
>> I am a bit mystified as to why the other side of this equation is being 
>> ignored, that is the writing part of reading and writing data.  If your 
>> program produces 123/100 and you write it to a file as .123, then this is 
>> where the bug is hatched.
>
> "0.123" is the more human common way to write 123/100. No bugs there.

You just ignored my point, btw.  In the context of ANSI CL as it now is, if 
you write a program that puts .123 in a file when the answer was 123/100 and 
it is then read by another program as .123d0 you do so have a bug.  My point 
is that the bug is in the program that wrote the file.

BTW, .123 is also the common human way to write .1230000563372 or 
.12299999999, and if I am writing a cheque then it would be .12.

Is 123/1000 = 12300/100000?  Does a human expect something different from 
.123 and .12300?  Not the naive human you want to satisfy, I daresay.  My 
pretend wife says they are the same.  Lisp says they're the same.  A chemist 
probably would not.
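
(Lisp's ratio arithmetic is exact and canonicalizing, so the check is 
unambiguous:)

  (= 123/1000 12300/100000)  ==> T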

Computer languages are for computers and a small subset of humans who 
program them, no one else.

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: Kent M Pitman
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <u8y618gpi.fsf@nhplace.com>
"Christophe Turle" <······@nospam.com> writes:

> "0.123" is the more human common way to write 123/100. No bugs there.

Geez, and here I thought 1.23 was the more human, common way to write 123/100. ;)
From: Christophe Turle
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <4206a437$0$950$626a14ce@news.free.fr>
"Kent M Pitman" <······@nhplace.com> a �crit dans le message de news: 
·············@nhplace.com...
> "Christophe Turle" <······@nospam.com> writes:
>
>> "0.123" is the more human common way to write 123/100. No bugs there.
>
> Geez, and here I thought 1.23 was the more human, common way to write 
> 123/100. ;)


Since the number 0.123 (or 123/1000) was in my head at the time I wrote 
this, I'm more and more inclined to think that 0.123 is really a better 
notation for rationals ;-)


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Christophe Turle
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <42064a93$0$938$626a14ce@news.free.fr>
"Christophe Rhodes" <·····@cam.ac.uk> a �crit dans le message de news: 
··············@cam.ac.uk...
> "Christophe Turle" <······@nospam.com> writes:
>
>> I agree. And the context for Lisp should be compatible with human 
>> context.
>> default float interpretation is not compatible with human reading. That's 
>> my
>> point.
>
> If that's your point, it's not a very good one.  I am a human (I know,
> I know, I could be a dog); I read Lisp quite a lot, and I can
> interpret the notation of tokens as floating point numbers.

'human context' means humans in general. In what percentage of documents 
written by humans is 0.1111111 meant to be interpreted as a float?

And even among programmers, what percentage really need knowledge of floats 
if they were given the rational choice? The vast majority of applications 
don't need floats to be written. They are just useful at the optimization 
stage.

> I actually don't believe that the solution to the "Naïve Reader"
> problem is to force everyone to descend to the level of Most Common
> Naïveté;

not "naive reader", right abstract level reader.

> instead, I'd suggest that the naïve reader use this to
> educate themselves.

And why not TV users knowing about quantum theory? OK, it's extreme, but it 
shows my point better ;-)


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Christopher C. Stacy
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <uhdkpsir8.fsf@news.dtpq.com>
Can you show us some code in the other programming language(s) 
to which you are referring in which numbers with a decimal point 
do not indicate a floating point number?

By the way, what TV channel frequencies are you being handed
that are not conveniently represented by floating point?
From: Christophe Turle
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <420670c4$0$958$626a14ce@news.free.fr>
"Christopher C. Stacy" <······@news.dtpq.com> a �crit dans le message de 
news: ·············@news.dtpq.com...
> Can you show us some code in the other programming language(s)
> to which you are referring in which numbers with a decimal point
> do not indicate a floating point number?

Perhaps some exist, but I don't know of any. But that's not a sufficient 
reason why it should be like this. Reasons must be objective ones, not 
'follow the others' ones.


> By the way, what TV channel frequencies are you being handed
> that are not conveniently represented by floating point?

(rationalize 201.0053) ; in MHz
=> 37990/189

It works with double-float. But the point is: why MUST I care about the float 
type EVEN if I'm not concerned with optimization???


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Peter Seibel
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <m3zmyhcwf4.fsf@javamonkey.com>
"Christophe Turle" <······@nospam.com> writes:

> "Christopher C. Stacy" <······@news.dtpq.com> a �crit dans le

>> By the way, what TV channel frequencies are you being handed
>> that are not conveniently represented by floating point?
>
> (rationalize 201.0053) ; in MHz
> => 37990/189

So what's the problem?

  (float (rationalize 201.0053)) ==> 201.0053

And you do know about this, right:

  (let ((*read-default-float-format* 'double-float))
    (rationalize (read-from-string "201.0053"))) ==> 2010053/10000
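
(A distinction worth keeping straight here: RATIONAL converts a float to its 
exact mathematical value, while RATIONALIZE may return any rational that the 
float approximates to within its own precision -- which is why it can recover 
the "nice" 2010053/10000 above.  Typical results, assuming IEEE single-floats:)

  (rationalize 0.1) ==> 1/10
  (rational 0.1)    ==> 13421773/134217728  ; the float's exact binary value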

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Christophe Turle
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <42067a7c$0$957$626a14ce@news.free.fr>
"Peter Seibel" <·····@javamonkey.com> a �crit dans le message de news: 
··············@javamonkey.com...
> "Christophe Turle" <······@nospam.com> writes:
>
>> "Christopher C. Stacy" <······@news.dtpq.com> a �crit dans le
>
>>> By the way, what TV channel frequencies are you being handed
>>> that are not conveniently represented by floating point?
>>
>> (rationalize 201.0053) ; in MHz
>> => 37990/189
>
> So what's the problem?
>
>  (float (rationalize 201.0053)) ==> 201.0053

the problem is that I want to print the value as a Hz value, so:

(my-print-fct (read-from-string "201.0053"))
=> "201005300"

without the float burden it would simply have been:

(defun my-print-fct ( in-Mhz )
  (format nil "~d" (* 1000000 in-Mhz)) )

I'm waiting for your solution.


>
> And you do know about this, right:
>
>  (let ((*read-default-float-format* 'double-float))
>    (rationalize (read-from-string "201.0053"))) ==> 2010053/10000
>

You cheat there! What you do is take a more precise float type, so:

- you use a float as a rational.
- another input value may not work (of course one not from your test 
cases). And there hidden errors begin, hard to detect... So using 
rationalize/double-float won't be accepted as a valid solution for my-print-fct 
;)


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Marc Battyani
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <cu5uup$37t@library2.airnews.net>
"Christophe Turle" <······@nospam.com> wrote

> the problem is that I want to print the value as a Hz value, so:
>
> (my-print-fct (read-from-string "201.0053"))
> => "201005300"
>
> without the float burden it would simply have been:
>
> (defun my-print-fct ( in-Mhz )
>   (format nil "~d" (* 1000000 in-Mhz)) )

I haven't followed this thread, so I may be out of context, but I would do this:
(format nil "~d" (round 201.0053 0.000001))
=>"201005300"

Assuming you want an integer number of Hz

Marc
From: Christophe Turle
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <42068229$0$1013$626a14ce@news.free.fr>
"Marc Battyani" <·············@fractalconcept.com> a �crit dans le message 
de news: ··········@library2.airnews.net...
>
> "Christophe Turle" <······@nospam.com> wrote
>
>> the problem is that I want to print the value as a Hz value, so:
>>
>> (my-print-fct (read-from-string "201.0053"))
>> => "201005300"
>>
>> without the float burden it would simply have been:
>>
>> (defun my-print-fct ( in-Mhz )
>>   (format nil "~d" (* 1000000 in-Mhz)) )
>
> I haven't followed this thread, so I may be out of context, but I would do 
> this:
> (format nil "~d" (round 201.0053 0.000001))
> =>"201005300"
>

With CLisp 2.33.1, it gives:

CL-USER> (format nil "~d" (round 201.0053 0.000001))
"201005296"



I hope it's my version of clisp that is buggy from my own additions; otherwise 
it proves that even for simple things like this, errors come easily with 
floats.
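
(For what it's worth, this need not be a clisp bug at all. One way to see what 
ROUND was actually handed -- making no claim about the exact digits any 
particular build prints:)

;; RATIONAL returns the exact binary fraction a float denotes.
;; 201.0053 has no exact single-float representation, so ROUND
;; receives a nearby binary fraction -- and the divisor 0.000001
;; is itself inexact.
(rational 201.0053)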


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Marc Battyani
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <cu61d0$rii@library2.airnews.net>
"Christophe Turle" <······@nospam.com> wrote
> > I haven't followed this thread, so I may be out of context, but I would
> > do this:
> > (format nil "~d" (round 201.0053 0.000001))
> > =>"201005300"
> >
>
> With CLisp 2.33.1, it gives:
>
> CL-USER> (format nil "~d" (round 201.0053 0.000001))
> "201005296"
>
>
> I hope it's my version of clisp that is buggy from my own additions;
> otherwise it proves that even for simple things like this, errors come
> easily with floats.

Have you looked at *read-default-float-format* (and set it to 'double-float)
as Peter suggested ?

Or else if your tables have 4 digits after the decimal point then round at
that value:

(format nil "~d" (* 100 (round 201.0053 0.0001)))
=>"201005300"

(format nil "~d" (* 100 (round 201.005296 0.0001)))
=>"201005300"

Marc
From: Christophe Turle
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <420690f6$0$1211$626a14ce@news.free.fr>
"Marc Battyani" <·············@fractalconcept.com> a �crit dans le message 
de news: ··········@library2.airnews.net...
>
> "Christophe Turle" <······@nospam.com> wrote
>> > I haven't followed this thread, so I may be out of context, but I would
>> > do this:
>> > (format nil "~d" (round 201.0053 0.000001))
>> > =>"201005300"
>> >
>>
>> With CLisp 2.33.1, it gives:
>>
>> CL-USER> (format nil "~d" (round 201.0053 0.000001))
>> "201005296"
>>
>>
>> I hope it's my version of clisp that is buggy from my own additions;
>> otherwise it proves that even for simple things like this, errors come
>> easily with floats.
>
> Have you looked at *read-default-float-format* (and set it to 
> 'double-float)
> as Peter suggested ?

Changing the float precision just pushes the general problem farther away. One 
day a number beyond the float precision will come along, and errors will 
begin to appear...

There is no escape with floats. The solution is to hack the reader (using 
macro characters, as suggested) to capture rational values.
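
(To make that concrete, here is a minimal sketch of the parsing half. 
PARSE-DECIMAL-RATIONAL is a name invented for this example, not a standard 
function, and it assumes tokens of the form [sign]digits[.digits]:)

(defun parse-decimal-rational (string)
  "Parse a decimal token like \"201.0053\" into the exact rational
2010053/10000, without ever constructing a float."
  (let* ((start (if (member (char string 0) '(#\+ #\-)) 1 0))
         (neg (char= (char string 0) #\-))
         (dot (position #\. string))
         (int-part (if (eql dot start)        ; token starts at the dot
                       0
                       (parse-integer string :start start
                                      :end (or dot (length string)))))
         (frac-part (if (and dot (< (1+ dot) (length string)))
                        (/ (parse-integer string :start (1+ dot))
                           (expt 10 (- (length string) dot 1)))
                        0)))
    (if neg (- (+ int-part frac-part)) (+ int-part frac-part))))

;; (parse-decimal-rational "201.0053") => 2010053/10000

The hooking half can then be done either by pre-tokenizing the input string, 
or by installing the digits and #\. as non-terminating macro characters that 
collect the token and call this parser.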

> Or else if your tables have 4 digits after the decimal point then round at
> that value:
>
> (format nil "~d" (* 100 (round 201.0053 0.0001)))
> =>"201005300"
>
> (format nil "~d" (* 100 (round 201.005296 0.0001)))
> =>"201005300"
>

You hand me the stick to beat you with ;)

I think that 201.005296 MHz has to be interpreted as 201005296 Hz not 
201005300 Hz


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Coby Beck
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <F9xNd.14633$tU6.7974@edtnps91>
"Christophe Turle" <······@nospam.com> wrote in message 
·····························@news.free.fr...
> There is no escape with floats. The solution is to hack the reader (using 
> macro characters, as suggested) to capture rational values.

No comment on the reading part, but it seems to me that a more appropriate 
abstraction would be to capture Hz, best represented as an integer (if that 
is precise enough?), and then format it as you choose.
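
(Concretely: held as an exact rational number of MHz -- e.g. via the 
PARSE-DECIMAL-RATIONAL sketch upthread -- the Hz value is a plain integer 
multiplication, with no rounding step to go wrong:)

  (* 1000000 2010053/10000)  ==> 201005300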


-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: Marc Battyani
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <cu672n$74p@library2.airnews.net>
"Christophe Turle" <······@nospam.com> wrote:

> I think that 201.005296 MHz has to be interpreted as 201005296 Hz not
> 201005300 Hz

No, you are talking radio channels, and telecom crystals are typically at 10
ppm. That's 1e-5, so 5 significant digits are enough. So with a double-float
you are safe. ;-)

Marc
(I just finished designing a 2.4GHz radio system)
From: Christophe Rhodes
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <sqk6pkeib6.fsf@cam.ac.uk>
"Christophe Turle" <······@nospam.com> writes:

> You hand me the stick to beat you with ;)
>
> I think that 201.005296 MHz has to be interpreted as 201005296 Hz not 
> 201005300 Hz

As well as Marc's point that typical crystal frequency tolerance is
about 10 parts per million, don't forget that to send a useful signal
your signal with base frequency of 201.005296 MHz needs to be
modulated, which implies that your channel will have a certain
bandwidth.

Let's say that you're interested in transmitting speech-quality data,
so a modulating frequency of about 22kHz is adequate -- if you want
music or video, obviously you'll need a higher modulating frequency,
and probably your transmitter won't lie in the MHz range.  Then your
band actually lies between 200.083296 MHz and 201.027296 MHz.

Given this, do you really think it's sensible to require your
transmitting frequency to be printed and treated as exactly 201.005296
MHz?  Or would 201.0 perhaps be a better representation?  (If you're
sending less dense information than speech, then you might be
interested in one or two more significant figures -- but not very many
more, because of the limit imposed by your transmitter.)

Christophe
From: Christophe Turle
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <4207dbfc$0$490$626a14ce@news.free.fr>
"Christophe Rhodes" <·····@cam.ac.uk> a �crit dans le message de news: 
··············@cam.ac.uk...
> Given this, do you really think it's sensible to require your
> transmitting frequency to be printed and treated as exactly 201.005296
> MHz?

Yes it is. Because this value will be sent to end users. And i'm sure that 
they want to see the exact value printed on their screen.


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Barry Margolin
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <barmar-A702D9.21330407022005@comcast.dca.giganews.com>
In article <·······················@news.free.fr>,
 "Christophe Turle" <······@nospam.com> wrote:

> "Christophe Rhodes" <·····@cam.ac.uk> a �crit dans le message de news: 
> ··············@cam.ac.uk...
> > Given this, do you really think it's sensible to require your
> > transmitting frequency to be printed and treated as exactly 201.005296
> > MHz?
> 
> Yes it is. Because this value will be sent to end users. And i'm sure that 
> they want to see the exact value printed on their screen.

I don't know much about your application, but do you even need to treat 
these things as numbers?  Do you actually do arithmetic on frequencies, 
or do you just read them in and present them to end users?  If the 
latter, just parse them as strings.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Coby Beck
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <9IWNd.25591$tU6.92@edtnps91>
"Barry Margolin" <······@alum.mit.edu> wrote in message 
·································@comcast.dca.giganews.com...
> In article <·······················@news.free.fr>,
> "Christophe Turle" <······@nospam.com> wrote:
>
>> "Christophe Rhodes" <·····@cam.ac.uk> a �crit dans le message de news:
>> ··············@cam.ac.uk...
>> > Given this, do you really think it's sensible to require your
>> > transmitting frequency to be printed and treated as exactly 201.005296
>> > MHz?
>>
>> Yes it is. Because this value will be sent to end users. And i'm sure 
>> that
>> they want to see the exact value printed on their screen.
>
> I don't know much about your application, but do you even need to treat
> these things as numbers?  Do you actually do arithmetic on frequencies,
> or do you just read them in and present them to end users?  If the
> latter, just parse them as strings.

The same thought occurred to me.  Hence another entry for my multiple-choice 
question: ".1111"

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: Rob Warnock
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <4PWdnUJf7bd015XfRVn-3w@speakeasy.net>
Christophe Rhodes  <·····@cam.ac.uk> wrote:
+---------------
| Let's say that you're interested in transmitting speech-quality data,
| so a modulating frequency of about 22kHz is adequate -- if you want
| music or video, obviously you'll need a higher modulating frequency,
| and probably your transmitter won't lie in the MHz range.  Then your
| band actually lies between 200.083296 MHz and 201.027296 MHz.
| 
| Given this, do you really think it's sensible to require your
| transmitting frequency to be printed and treated as exactly 201.005296
| MHz?  Or would 201.0 perhaps be a better representation?  (If you're
| sending less dense information than speech, then you might be
| interested in one or two more significant figures -- but not very many
| more, because of the limit imposed by your transmitter.)
+---------------

But if you're using narrow-band SSB with 6 kHz channel separation
[which some military radios do], then you'd *better* be carrying
6 or 7 digits of precision, 'cuz if you drift from 200.083296 MHz
to 201.027296 MHz, you've just crossed 15 or 16 other conversations!!

Even in amateur radio using (DSB) AM or FM, channel separations
of 25 kHz at frequencies of hundreds of MHz are not unusual, e.g.:

    <http://solair.eunet.yu/~yu1arl/bp-432.htm>
    2.2(i) On a temporary basis, in those countries where
    433.619-433.781 MHz is the only segment of the 435 MHz band
    available for Digital Communications:

    1. Channels with centre frequencies 433.700, 432.725, 432.750,
       432.775, 434.450, 434.475, 434.500, 434.525, 434.550 and 434.575
       may be used for digital communications.
    2. Use of these channels must not interfere with linear transponders. 
    3. Modulation techniques requiring a channel separation exceeding
       25 kHz must not be used on these channels. 

And for clear separation, the center frequencies must be within +/- 500 Hz.
So, yes, modern radios must use *much* more precise frequencies than
commonly used earlier...


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Thomas A. Russ
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <ymiy8e01azj.fsf@sevak.isi.edu>
"Christophe Turle" <······@nospam.com> writes:
> the problem is that I want to print the value as a Hz value, so:
> 
> (my-print-fct (read-from-string "201.0053"))
> => "201005300"
> 
> without the float burden it would simply have been:
> 
> (defun my-print-fct ( in-Mhz )
>   (format nil "~d" (* 1000000 in-Mhz)) )
> 
> I'm waiting for your solution.

This is a different problem.
Look at the reference to the MEASURES package.

If you want an updated version that I've worked on a bit, it is
also packaged up with our Loom package (but you can just extract the
measures part of it).  IIRC there is even an option to treat all values
as rational numbers.  If not, it could easily be added to the reader...

Original Measures:  http://tinyurl.com/5lppy
In Loom:  http://www.isi.edu/isd/LOOM/how-to-get.html


-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Christophe Turle
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <4207de20$0$485$626a14ce@news.free.fr>
"Thomas A. Russ" <···@sevak.isi.edu> a �crit dans le message de news: 
···············@sevak.isi.edu...
> "Christophe Turle" <······@nospam.com> writes:
>> the problem is that I want to print the value as a Hz value, so:
>>
>> (my-print-fct (read-from-string "201.0053"))
>> => "201005300"
>>
>> without the float burden it would simply have been:
>>
>> (defun my-print-fct ( in-Mhz )
>>   (format nil "~d" (* 1000000 in-Mhz)) )
>>
>> I'm waiting for your solution.
>
> This is a different problem.
> Look at the reference to the MEASURES package.
>
> If you want an updated version of that that I've worked on a bit, it is
> also packaged up with our Loom package (but you can just extract the
> measures part of it).  IIRC there is even an option to treat all values
> as rational numbers.  If not, it could easily be added to the reader...
>
> Original Measures:  http://tinyurl.com/5lppy
> In Loom:  http://www.isi.edu/isd/LOOM/how-to-get.html

Thx. Having some roots in AI, i will gladly check it ;)



-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Kent M Pitman
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <uu0op4fxy.fsf@nhplace.com>
"Christophe Turle" <······@nospam.com> writes:

> "Christopher C. Stacy" <······@news.dtpq.com> a �crit dans le message de 
> news: ·············@news.dtpq.com...
> > Can you show us some code in the other programming language(s)
> > to which you are referring in which numbers with a decimal point
> > do not indicate a floating point number?
> 
> Perhaps some exist, but I don't know of any. But that's not a sufficient 
> reason why it should be like this. Reasons must be objective ones, not 
> 'follow the others' ones.

Fair enough.  Please just give an "objective" description of the term
"objective" and we'll be all set.

One might, for example, argue that objective and subjective differ
only in that the subjectivity in the former is embedded so
fundamentally in the design of the universe that it is no longer
subject to change.  That is, certain numeric constants and
relationships may have at one point in the design of the Universe been
variables, but once other things were nailed down, they no longer
could be.

Floating point is very close to this in the computer world, given that
there is floating point hardware on nearly every machine, and decades of
history involving thousands, perhaps even millions, of programmers who
are familiar with the "meaning" of 0.11111 in computerese.

As such, I would take Chris's question as an attempt to be objective,
that is, to say that the existing mechanism and notation serves a nearly
ubiquitous aspect of both notation and implementation that transcends
programming languages and is not likely to change any time in the next
year or two, nor probably much longer.  That's a good floating point 
approximation to "objective" in my book...
 
> > By the way, what TV channel frequencies are you being handed
> > that are not conveniently represented by floating point?
> 
> (rationalize 201.0053) ; in MHz
> => 37990/189
> 
> It works with double-float. But the point is: why MUST I care about the float 
> type EVEN if I'm not concerned with optimization???

You don't have to at all.  If you're not concerned about optimization, 
you're free to write and advertise a BCD library with appropriate readers
and/or readmacros.

What you ARE obliged to do is to write code that is not there by default.
Lisp enables you to do many things, but one of the things you are not enabled
to do is to wish for things and have them come into existence without 
any work.

Maybe you just want Scheme.  In Scheme, they have an extra dimension
of notational terminology for numbers--exactness vs inexactness
(orthogonally to floating point vs integer, such that you can have
exact floats and inexact integers).
From: Cameron MacKinnon
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <X82dna2XWdgHHpvfRVn-sQ@golden.net>
Kent M Pitman wrote:
> Floating point is very close to [being fundamentally embedded] in the
> computer world, given that there is floating point hardware on nearly
> every machine, and decades of history involving thousands, perhaps
> even millions, of programmers who are familiar with the "meaning" of
> 0.11111 in computerese.

I suspect that of all the people who have ever written floating point
code (and here I'll be generous and exclude mere spreadsheet users),
fewer than half of them even realized that the representation was
inexact, and perhaps one in fifty understood enough to be wary of the
pitfalls.

There was perhaps a time when the majority of users of general purpose
FP hardware wanted inexact math without error bounds calculation by
default -- i.e they were willing to live with the occasional need for
numerical analysis and the odd model blowing up because they couldn't
afford the cycles or the gates to compute cumulative errors along with
their shady numbers. That time is long past, I think.

> What you ARE obliged to do is to write code that is not there by
> default. Lisp enables you to do many things, but one of the things
> you are not enabled to do is to wish for things and have them come
> into existence without any work.

True... That's what c.l.l is for! The original poster got helpful code
and suggestions, but also, disturbingly, comments to the effect that
yes, Lisp is a programmable programming language, but you shouldn't
program it THAT way.
From: Kent M Pitman
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <ud5vd8grv.fsf@nhplace.com>
Cameron MacKinnon <··········@clearspot.net> writes:

> Kent M Pitman wrote:
> > Floating point is very close to [being fundamentally embedded] in the
> > computer world, given that there is floating point hardware on nearly
> > every machine, and decades of history involving thousands, perhaps
> > even millions, of programmers who are familiar with the "meaning" of
> > 0.11111 in computerese.
> 
> I suspect that of all the people who have ever written floating point
> code (and here I'll be generous and exclude mere spreadsheet users),
> fewer than half of them even realized that the representation was
> inexact, and perhaps one in fifty understood enough to be wary of the
> pitfalls.

Definitely.  I have made this point many times.  I myself use floating
point only for the most simple-minded of uses, and am frequently heard
to say (much to the consternation of my colleagues) that I "don't
understand" floating point.  By which I usually mean "I understand
just enough about the odd-shaped space of points its baroque
representation designates to know I'm probably very bad at having any
intuitive understanding of its moment-to-moment variations in
precision as data changes from things that are more precisely
represented to things that are less precisely so".  And since most
people don't seem to understand what I mean by that, I usually resort
to just saying I don't understand it at all, which seems to make them
think they understand better.  Heh.
 
I have often also compared floats to LOOP in this regard.  Deceptively
simple in how it draws you in, but you sometimes have to be an expert
in order to know for sure whether what seems to make sense in English
in fact means what it looks like it means.  As such, it appeals to almost
exactly the segment of the user base for which it is not intended.  I've
mellowed somewhat in my feelings about LOOP over the years, but not so
much in my feelings about floats...

> There was perhaps a time when the majority of users of general purpose
> FP hardware wanted inexact math without error bounds calculation by
> default -- i.e. they were willing to live with the occasional need for
> numerical analysis and the odd model blowing up because they couldn't
> afford the cycles or the gates to compute cumulative errors along with
> their shady numbers. That time is long past, I think.

A reasonable point is made here, so let me back up slightly to at least 
this extent:

I once made a similar case about how the time had passed for Lisp to
start up in base 8, and how "modern users" all expected "base 10".  At
this point, I won't try to embarrass anyone you may revere by giving
the long list of names of people who pooh-poohed the idea as
ridiculous and who argued seriously that base 8 was better and users
just didn't know what was good for them.  I was, for a while, in a
minority among the designers, who lagged the users in the obvious
truth.

I won't claim that situations like you describe don't happen.

But the question is, as much, whose responsibility it is to be taking
the step forward into the brave new world you describe.

I could equally well make the case that rather than give us 6GHz
processor chips for Christmas, we might appreciate 3GHz chips with a
hardware BCD feature instead.  And then ALL languages could give some
thought to whether to switch over.

I don't think the Lisp community's user base is strong enough to lead
the pack into an enlightened future that forces hardware support, and
absent hardware support, I think we'll just be laughed at as a toy.
We HAVE support for rationals, which is better than most languages.
But I don't see a pressing need to abandon what little syntax
compatibility we have with other languages ... we went to a lot of
trouble to make the float syntax, the FORMAT ops, etc. be as
compatible as possible with IEEE, with the Fortran language, etc.

If it's time for the change you're asking for, it's up to IEEE to do
it.  I won't disagree with a claim that with processors getting faster
and faster, almost to the point of uselessness, it's time for us to
put in some onboard support for things the common person has needed
for ages.  I just think, if anything, that fixing it at the
superficial level and not at the underlying level, is reducing
pressure on the proper authorities to fix it right.

I have made a similar process argument, by the way, about how
recycling of waste materials should work in my town.  Some of my
friends are willing to drive to the dump, but I always tell them they
shouldn't.  They should hold out for door-to-door service.  They
dillute and use up their precious interest in this topic by the time
they spend driving across town to put in their petty amounts of
trash. And they consider the problem fixed for themselves, and sit
waiting for "the masses" to get with it and do the same.  But the
masses are lazy, and no such end comes.  If instead they insisted
on being lazy and just didn't consider the problem solved until
EVERYONE had recycling pickup from their door, they (a) wouldn't have
to drive their trash to the dump and (b) wouldn't use up the time they
have to devote on the matter in solving the problem imperfectly.  By
being lazy, they force a solution that even suits lazy people.  If
they're bugged, let them use that time they had to drive across town
being on the phone to a Congressperson lobbying for change.  Because a
solution to THAT problem will serve everyone, even the lazy people,
and will bring orders of magnitude larger returns.

So here I sit, following my carefully crafted "lazy is better"(TM)
strategy to get IEEE to fix the problem.  The last thing we need is a
place for people who are annoyed to take happy refuge and stop
bothering the chipmakers and standardsmakers who could make a REAL
difference.... ;)
From: Coby Beck
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <A4xNd.14630$tU6.3244@edtnps91>
"Cameron MacKinnon" <··········@clearspot.net> wrote in message 
···························@golden.net...
>> What you ARE obliged to do is to write code that is not there by
>> default. Lisp enables you to do many things, but one of the things
>> you are not enabled to do is to wish for things and have them come
>> into existence without any work.
>
> True... That's what c.l.l is for! The original poster got helpful code
> and suggestions, but also, disturbingly, comments to the effect that
> yes, Lisp is a programmable programming language, but you shouldn't
> program it THAT way.

I don't think that is a fair comment.  What you got was people saying how 
the problem could be solved, and disagreeing with the suggestion that there is 
something broken in the design of the language.

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: Rahul Jain
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87r7joib2c.fsf@nyct.net>
Cameron MacKinnon <··········@clearspot.net> writes:

> True... That's what c.l.l is for! The original poster got helpful code
> and suggestions, but also, disturbingly, comments to the effect that
> yes, Lisp is a programmable programming language, but you shouldn't
> program it THAT way.

No, he is claiming that CL is wrongly specified. We are claiming that it
is specified in a way that makes sense for the majority of its users. 

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Christophe Turle
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <42067ff2$0$946$626a14ce@news.free.fr>
"Kent M Pitman" <······@nhplace.com> a �crit dans le message de news: 
·············@nhplace.com...
> "Christophe Turle" <······@nospam.com> writes:
>
>> "Christopher C. Stacy" <······@news.dtpq.com> a �crit dans le message de
>> news: ·············@news.dtpq.com...
>> > Can you show us some code in the other programming language(s)
>> > to which you are referring in which numbers with a decimal point
>> > do not indicate a floating point number?
>>
>> Perhaps some exist, but I don't know of any. But that's not a sufficient
>> reason why it should be like this. Reasons must be objective ones, not
>> 'follow the others' ones.
>
> Fair enough.  Please just give an "objective" description of the term
> "objective" and we'll be all set.

It is sufficient to say that we agree that this reason was NOT objective.

> Floating point is very close to this in the computer world, given that
> there is floating point hardware on nearly every machine,

Better and better. Why didn't I think of it sooner? I have to write my app 
with floats because the hardware uses them ;)

> and decades of
> history involving thousands, perhaps even millions, of programmers who
> are familiar with the "meaning" of 0.11111 in computerese.

Sorry, but I won't write buggy, time-consuming apps just because 'millions' of 
people know float theory. The fact that lots of people know assembler 
languages doesn't mean I will use them.

> As such, I would take Chris's question as an attempt to be objective,
> that is, to say that the existing mechanism and notation serves a nearly
> ubiquitous aspect of both notation and implementation that transcends
> programming languages and is not likely to change any time in the next
> year or two, nor probably much longer.  That's a good floating point
> approximation to "objective" in my book...

The real question is what will win in the end, not during the next year. 
Natural selection works even for programming languages. I just hope we won't 
be dead by then.

>> > By the way, what TV channel frequencies are you being handed
>> > that are not conveniently represented by floating point?
>>
>> (rationalize 201.0053) ; in MHz
>> => 37990/189
>>
>> It works with double-float. But the point is: why MUST I care about the
>> float type EVEN if I'm not concerned with optimization???
>
> You don't have to at all.  If you're not concerned about optimization,
> you're free to write and advertise a BCD library with appropriate readers
> and/or readmacros.

Yes, surely the way to go.

> Maybe you just want Scheme.  In Scheme, they have an extra dimension
> of notational terminology for numbers--exactness vs inexactness
> (orthogonally to floating point vs integer, such that you can have
> exact floats and inexact integers).

Perhaps, i will look at it.



-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr)
From: Barry Margolin
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <barmar-282AD3.21101506022005@comcast.dca.giganews.com>
In article <·······················@news.free.fr>,
 "Christophe Turle" <······@nospam.com> wrote:

> It works with double-float. But the point is: why MUST I care about the float 
> type EVEN if I'm not concerned with optimization???

Because you're not the only programmer, and other programmers have 
concerns different from yours.  The language has to cater to a wide 
variety of needs.  For many of them, floating point is the proper tool, 
and CL provides it, just like most other languages.

If you don't need floating point, don't use it.  But it seems really 
conceited of you to expect that the language should be designed to 
facilitate your specific application rather than others.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Christophe Turle
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <4207df28$0$504$626a14ce@news.free.fr>
"Barry Margolin" <······@alum.mit.edu> a �crit dans le message de news: 
····························@comcast.dca.giganews.com...
> In article <·······················@news.free.fr>,
> "Christophe Turle" <······@nospam.com> wrote:
>
>> It works with double-float. But the point is: why MUST I care about the
>> float type EVEN if I'm not concerned with optimization???
>
> Because you're not the only programmer, and other programmers have
> concerns different from yours.  The language has to cater to a wide
> variety of needs.  For many of them, floating point is the proper tool,
> and CL provides it, just like most other languages.
>
> If you don't need floating point, don't use it.  But it seems really
> conceited of you to expect that the language should be designed to
> facilitate your specific application rather than others.

The problem is that you can always convert rational to float but not the 
contrary.

This is the 'objective' ;) reason I think rational should be the default 
interpretation, not because it suits my needs best.


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: William D Clinger
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <1107821649.977951.151220@z14g2000cwz.googlegroups.com>
Christophe Turle wrote:
> The problem is that you can always convert rational to float but
> not the contrary.
>
> This is the 'objective' ;) reason I think rational should be the
> default interpretation, not because it suits my needs best.

What you wrote above is not correct.  Not every rational
is representable as a float, and not every float (e.g.
positive infinity, negative zero) is representable as
a rational, so your style of argument does not provide
an objective criterion for deciding which should be the
default interpretation.

*Most* floats are representable as rationals, and you
might wish to argue that Common Lisp's rationals should
be extended so that all floats are representable as
rationals, but you haven't yet made that argument.  To
me, it isn't clear what argument you think you are making.

Will
From: jayessay
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <m34qgmkgz8.fsf@rigel.goldenthreadtech.com>
"Christophe Turle" <······@nospam.com> writes:

> The problem is that you can always convert rational to float but not the 
> contrary.

You're dead wrong on this.  Think about it some more.


> This is the 'objective' ;) reason I think rational should be the default 
> interpretation, not because it suits my needs best.

Well, since you are dead wrong on this "objective" reason, what are
you going to do now?


/Jon

-- 
'j' - a n t h o n y at romeo/charley/november com
From: Christophe Turle
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <42093d72$0$523$626a14ce@news.free.fr>
"jayessay" <······@foo.com> a �crit dans le message de news: 
··············@rigel.goldenthreadtech.com...
> "Christophe Turle" <······@nospam.com> writes:
>
>> The problem is that you can always convert rational to float but not the
>> contrary.
>
> You're dead wrong on this.  Think about it some more.

To be concrete:

(F (R "0.1111111111"))

R is the reader; it reads the input as a float (your default choice, say 
'RF') or as a rational (my default choice, say 'RR').

F is the application function. As it represents any app, it may want a 
rational (FR) or a float (FF) as input.

What I mean by "you can always convert rational to float" is:
I can produce a converter R->F, which converts rational to float, such that 
(FF (R->F (RR input))) will behave like (FF (RF input));
of course (FR (RR input)) works as is.

What I mean by "but not the contrary" is:
I can't produce a converter F->R, which converts float to rational, such 
that (FR (F->R (RF input))) will behave like (FR (RR input));
of course (FF (RF input)) works as is.
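
An illustration of the asymmetry at the REPL (making no claim about the exact 
fraction a given implementation returns):

;; RR then R->F: kept exact, rounded once at the end.
(float (read-from-string "1111111111/10000000000"))

;; RF then F->R: the decimal is rounded to a binary fraction at read
;; time; RATIONAL can only recover that fraction, never the original
;; 1111111111/10000000000.
(rational (read-from-string "0.1111111111"))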

>> This is the 'objective' ;) reason I think rational should be the default
>> interpretation, not because it suits my needs best.
>
> Well, since you are dead wrong on this "objective" reason, what are
> you going to do now?

I'll wait for your answer and use your F->R in my app ;-)

Christophe Turle.

 
From: jayessay
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <m3zmyeiqkl.fsf@rigel.goldenthreadtech.com>
"Christophe Turle" <······@nospam.fr> writes:

> "jayessay" <······@foo.com> a �crit dans le message de news: 
> ··············@rigel.goldenthreadtech.com...
> > "Christophe Turle" <······@nospam.com> writes:
> >
> >> The problem is that you can always convert rational to float but not the
> >> contrary.
> >
> > You're dead wrong on this.  Think about it some more.
> 
> To be concrete:
> 
> (F (R "0.1111111111"))
>
> R is the reader; it reads the input as a float (your default choice, say 
> 'RF') or as a rational (my default choice, say 'RR').
> 
> F is the application function. As it represents any app, it may want a 
> rational (FR) or a float (FF) as input.
> 
> What I mean by "you can always convert rational to float" is:
> I can produce a converter R->F, which converts rational to float, such that 
> (FF (R->F (RR input))) will behave like (FF (RF input));
> of course (FR (RR input)) works as is.

Playing Humpty Dumpty isn't going to help you here.  This isn't that
hard, but it appears you need to think about it some more.


> > Well, since you are dead wrong on this "objective" reason, what are
> > you going to do now?
> 
> I'll wait for your answer and use your F->R in my app ;-)

Hope it helps...


/Jon

-- 
'j' - a n t h o n y at romeo/charley/november com
From: Christopher C. Stacy
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <usm48d4qh.fsf@news.dtpq.com>
"Christophe Turle" <······@nospam.com> writes:

> "Christopher C. Stacy" <······@news.dtpq.com> a �crit dans le message de 
> news: ·············@news.dtpq.com...
> > Can you show us some code in the other programming language(s)
> > to which you are referring in which numbers with a decimal point
> > do not indicate a floating point number?
> 
> Perhaps some exist, but I don't know of any. But that's not a sufficient 
> reason why it should be like this. Reasons must be objective ones, not 
> 'follow the others' ones.
> 
> 
> > By the way, what TV channel frequencies are you being handed
> > that are not conveniently represented by floating point?
> 
> (rationalize 201.0053) ; in MHz
> => 37990/189
> 
> It works with double-float. But the point is: why MUST I care about the float 
> type EVEN if I'm not concerned with optimization???

Because floating point is how all computer programming languages,
including Lisp, do arithmetic when they see a number with a decimal point.

Lisp does offer an unusual alternative not available in most other
languages, which is rational numbers.  But to use that facility,
you have to write the number in a different way so that Lisp will
know that you don't want to do the normal thing.
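
That different way being ratio syntax, with which everything stays exact:

  (* 1111/10000 10000) ==> 1111
  (+ 1/10 2/10)        ==> 3/10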
From: Kent M Pitman
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <uu0oo8eoj.fsf@nhplace.com>
······@news.dtpq.com (Christopher C. Stacy) writes:

> > It works with double-float. But the point is: why MUST I care about the float 
> > type EVEN if I'm not concerned with optimization???
> 
> Because floating point is how all computer programming languages,
> including Lisp, do arithmetic when they see a number with a decimal point.

Right.  And, more to the point, it's how computers do it.

One of the reasons we call them "computer programming languages" is that it's
the language for programming the 
<quotemarks device="fingers" style="Austin Powers">computer</quotemarks>.

And while it is certainly true that hardware can be emulated by program, 
programming is a task that we often leave to the user.  The part the user can't
program is access to the native hardware, and that's what we provide through
floats.  If memory serves, that's precisely what we meant where we used the
phrase "codify existing practice" in the original X3J13 charter document.
See for reference: http://www.nhplace.com/kent/CL/x3j13-86-020.html

Also lost in this conversation:

The job of a standard is not to create practice, but to document it.
If the original complaint were that all programmers everywhere ignore
floating point and write BCD subroutines to achieve correctness, that
would make a more compelling case as a linguistic criticism.

If the original complaint were even that all vendors offer BCD
substrates and that the language makes it hard to expose these much
more common substrates because it provides a substrate that everyone
wishes would go away, that would also make a more compelling
criticism.

A footnote:

One of the reasons that standards have, to my mind, somewhat fallen
away in the modern world (left as "history" and no longer the central
focus of all computational activity) is that in the modern pluralistic
state, the needs of the many are, well, many.  People no longer want
to do the global synchronization step of beating everyone else into
the dull stupor that so roughly implements global agreement.
Rather, they want to be personally empowered to make choices good for
themselves notwithstanding the desires of other people to make other
choices, good or bad.

As such, one of the only notions of "good" that I recognize any more is
"supports individual choice".  Since we already support complete control
of redefining the reader, I think we can tick off the "supports choice" 
box there.  Since we support the most common kind of math (the kind everyone
seems to want on-chip), and since we provide substantial support for 
alternatives besides, not just in terms of rationals but also in terms of
a package system, generic functions, and other tools for permitting language
extension or customization, I think we can tick that off, too.  

So it's not obvious what's being discussed here other than the old theory
of beating people into an unwanted stupor.  So unless there is some 
surprisingly new material on this thread, I think I've said all I have to
say.
From: Christopher C. Stacy
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <uu0onpqik.fsf@news.dtpq.com>
Kent M Pitman <······@nhplace.com> writes:

> ······@news.dtpq.com (Christopher C. Stacy) writes:
> 
> > > It works with double-float. But the point is: why MUST I care about the float 
> > > type EVEN if I'm not concerned with optimization???
> > 
> > Because floating point is how all computer programming languages,
> > including Lisp, do arithmetic when they see a number with a decimal point.
> 
> Right.  And, more to the point, it's how computers do it.
> 
> One of the reasons we call them "computer programming languages" is that it's
> the language for programming the 
> <quotemarks device="fingers" style="Austin Powers">computer</quotemarks>.

"I want...one hundred million...billion decimal places of accuracy!"
From: David Steuber
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87mzuh5n9v.fsf@david-steuber.com>
"Christophe Turle" <······@nospam.com> writes:

> 'human context' means humans in general. In what percentage of documents 
> written by humans is 0.1111111 meant to be interpreted as a float?

As a rough estimate, and WAG, I would say 100%.

> And even among programmers, what percentage really need knowledge of floats 
> if they were given the rational choice? The vast majority of applications 
> don't need floats to be written. They are just useful at the optimization 
> stage.

Do you have data for this?  Every time I use floats of whatever
precision it is because I really do need them.

-- 
An ideal world is left as an exercise to the reader.
   --- Paul Graham, On Lisp 8.1
From: Dan Muller
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <iNsNd.1860$ng6.427@newssvr17.news.prodigy.com>
"Christophe Turle" <······@nospam.com> writes:

> "Christophe Rhodes" <·····@cam.ac.uk> a �crit dans le message de news: 
> ··············@cam.ac.uk...
>> "Christophe Turle" <······@nospam.com> writes:
>>
>>> I agree. And the context for Lisp should be compatible with human 
>>> context.
>>> default float interpretation is not compatible with human reading. That's 
>>> my
>>> point.

This is just silly. It's a programming convention that has held over a
long time and in many programming languages. The notation is for
programmers, not end users. It's good that Lisp follows convention in
this particular, it's less surprising for prorgrammers. Just as in any
programming language, if you want different conventions in your user
interface, you provide them by writing code.

>>
>> If that's your point, it's not a very good one.  I am a human (I know,
>> I know, I could be a dog); I read Lisp quite a lot, and I can
>> interpret the notation of tokens as floating point numbers.
>
> 'human context' means humans in general. In what percentage of documents 
> written by humans is 0.1111111 meant to be interpreted as a float?

Since most non-computer users don't know what "float" is, probably
almost none, of course. The question isn't relevant. But in reading
scientific papers, I would normally assume that numbers written this
way are approximate, unless otherwise specified, which is an
assumption that is similar to assuming a data type of "float".

----
Dan Muller
email: (reverse ···············@20sd89dti")
From: Cameron MacKinnon
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <LrmdnUIEgZIQ5JvfRVn-2Q@golden.net>
Dan Muller wrote:

[regarding floating point numbers]

> This is just silly. It's a programming convention that has held over
> a long time and in many programming languages. The notation is for 
> programmers, not end users.

You don't really believe this, do you? The notation convention dates
from when users did their own programming. And the underlying
representation's accuracy and range was obviously designed to represent
physical world quantities rather than the sort of numbers that pure
computer scientists might have a use for. It's not like floating point
is a useful tool that programmers generally use to build numeric
representations whose accuracy and range are appropriate for the problem
at hand. The representation was designed for the end user, back in the
day when the end user was almost always a hard scientist calculating
using imprecise physical world measurements.
From: Rahul Jain
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87brasi8xz.fsf@nyct.net>
Cameron MacKinnon <··········@clearspot.net> writes:

> The representation was designed for the end user, back in the day when
> the end user was almost always a hard scientist calculating using
> imprecise physical world measurements.

Or a game designer who wants to compute where this rocket should be
placed in the next frame. Or how much slippage there is of the tires on
this track at this speed in this weather.

Face it. Games are far more important to computing than some number
theory computation, and games don't need infinite precision. They need
to operate at real-world speed, because if they don't, they'll look even
less realistic than if they get the position of the rocket off by 1mm.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Christophe Turle
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <4206758e$0$1007$626a14ce@news.free.fr>
"Dan Muller" <·········@sneakemail.com> a �crit dans le message de news: 
··················@newssvr17.news.prodigy.com...
> "Christophe Turle" <······@nospam.com> writes:
>
>> "Christophe Rhodes" <·····@cam.ac.uk> a �crit dans le message de news:
>> ··············@cam.ac.uk...
>>> "Christophe Turle" <······@nospam.com> writes:
>>>
>>>> I agree. And the context for Lisp should be compatible with human
>>>> context.
>>>> default float interpretation is not compatible with human reading. 
>>>> That's
>>>> my
>>>> point.
>
> This is just silly.

thx ;)

> It's a programming convention that has held over a
> long time and in many programming languages.

And so what? It may be sufficient to convince you, but not me.

> The notation is for programmers, not end users.

Even for programmers this is a bad choice. From my programming experience 
within different contexts, I have seen that programmers use floats like 
rationals. Of course, this causes some problems which would have been 
avoided if only rational numbers had been at hand.

Don't misunderstand me: floats are useful, but just for optimization 
purposes. So for apps using lots of number computation, I'm sure that 
programmers have to know how to use floats. But for the rest, and they are 
the large majority of apps, you wouldn't need them if another choice were 
given. That's why rationals should be the DEFAULT reader choice.


>>> If that's your point, it's not a very good one.  I am a human (I know,
>>> I know, I could be a dog); I read Lisp quite a lot, and I can
>>> interpret the notation of tokens as floating point numbers.
>>
>> 'human context' means humans in general. What % of documents written by
>> humans that include 0.1111111 mean it to be interpreted as a float?
>
> Since most non-computer users don't know what "float" is, probably
> almost none, of course. The question isn't relevant. But in reading
> scientific papers, I would normally assume that numbers written this
> way are approximate, unless otherwise specified, which is an
> assumption that is similar to assuming a data type of "float".

Scientific programs are not the preponderant part of programs in the world.


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Christopher C. Stacy
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <umzugd075.fsf@news.dtpq.com>
"Christophe Turle" <······@nospam.com> writes:
> Even for programmers this is a bad choice.

In the realm of general purpose programming with fractions, 
there are three choices: floating point, fixed point, and rationals.
Lisp gives you the two most useful: floating point is for doing fast
calculations, and rationals are for doing exact computations.  
Lisp doesn't offer the one that's limited, inexact, and slow.
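For instance (a hypothetical session; the exact printed float output
can vary by implementation):

;; exact versus fast:
> (* 3 1/10)
3/10
> (* 3 0.1)
0.3

The second result only prints as 0.3; (= 3/10 (* 3 0.1)) is NIL.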

Most other languages in use today have chosen to only provide 
floating point.  They do not have a "default", since there is
no choice.  Lisp has a default, and it uses the syntax that is
most compatible with those other languages.

Whether that's a "bad choice" depends on what kind of programs
you're writing and how important the optimizations for speed are.

> Don't misunderstand me : float are useful but just for an optimization 
> purpose. So with app using lots of number computation, i'm sure that 
> programmers have to know how to use floats. But for the rest, and they are 
> the large majority of App, you don't need them, if an other choice were 

I don't know where you are coming up with the "majority" opinion.
Apparently, though, you have been out-voted: floating point has
been placed into all the hardware, and support for fixed-point
(such as string-oriented numerics and BCD, which used to be fairly
popular in hardware) has been removed.

The languages that offer fixed-point require you to annotate your
variables with declarations using words like "CURRENCY" or "fixed
decimal" or "USAGE COMP-3" or "is delta 0.01 digits 5 range 0.0..99.88".
You can do the equivalent thing in Lisp by writing a function
that takes your input string and converts it into the exact
representation of a rational number.
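A minimal sketch of such a converter (the name is hypothetical, and it
handles only unsigned digits-dot-digits tokens - no sign, no exponent):

;; "0.123" => 0 + 123/1000 => 123/1000, exactly
(defun decimal-string->rational (string)
  (let* ((dot  (position #\. string))
         (int  (parse-integer string :end dot))
         (frac (if dot (subseq string (1+ dot)) "")))
    (if (zerop (length frac))
        int
        (+ int (/ (parse-integer frac)
                  (expt 10 (length frac)))))))

> (decimal-string->rational "0.123")
123/1000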

It would be possible to change the Lisp language to support the
default input numeric format that you want, but that's not happening.
But you have even been shown several ways to kludge it up in the 
existing language, a feat which is inconceivable in the other
programming languages.

Now go implement your solution, using either the annotation approach
used in other languages, or using the hack. Trying to convince a
community of people who are invested in a backwards-compatible,
standards-based, stable programming language that they should change
how this default works is a waste of time, especially since nobody
seems to agree with you about its desirability.

There's no reason for Lisp programmers to be interested in writing
literal decimal-point constants representing fixed-point numbers 
in their code.  They would use rational syntax, instead.  And when
doing string processing of someone else's data, such as currency, 
they just write whatever parsers (and, if necessary, new data types
and methods) are needed to get the desired numerical results.

If Lisp standards or implementations were going to change something
related to this at all, it would be to better support very large
arrays and block-compilation and memory mapping of unboxed floats.
From: Damien Kick
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <oeevsub3.fsf@email.mot.com>
······@news.dtpq.com (Christopher C. Stacy) writes:

> Whether [or not floating point numbers are] a "bad choice" depends
> on what kind of programs you're writing and how important the
> optimizations for speed [...] floating point has been placed into
> all the hardware [...]

Personally, I'm not at all bothered by needing to type 123/1000 if I
want a guaranteed-to-be-exact representation of 0.123.  I also think
that any number of alternative suggestions offered in this thread
(reader macros, custom reader, etc.) should be perfectly sufficient to
solve the problem of FP approximations.  Perhaps such an alt-reader
would treat "0.123" as "123/1000" while treating "0.123..." and
"1.23e-1"
<http://groups-beta.google.com/group/comp.lang.lisp/msg/8c67513fba65dc3e>
as FP approximations.
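For instance, the dispatch might look like this sketch (leaning on a
DECIMAL-STRING->RATIONAL like the one posted upthread; the trailing
"..." test is my own invention):

(defun read-number-token (token)
  (cond ;; an exponent marker means the writer asked for a float
        ((find-if #'alpha-char-p token)
         (values (read-from-string token)))
        ;; a trailing "..." marks a truncated, approximate value
        ((and (> (length token) 3)
              (string= "..." token :start2 (- (length token) 3)))
         (values (read-from-string (subseq token 0 (- (length token) 3)))))
        ;; otherwise, a bare decimal is taken as exact
        (t (decimal-string->rational token))))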

However, I do think, from my n00bish POV, that it is interesting to
contrast the above comments, and any number of other comments in this
thread thumping on the pulpit of
well-that-is-just-how-its-done-in-computers
<http://groups-beta.google.com/group/comp.lang.lisp/msg/c28455918fb44882>,
and the Lisp treatment of floating-point/rational with an eye toward
bignum/fixnum.  The Lisp default treatment for integral types is one
that favors correctness over speed; one has to explicitly declare
fixnums to enable capable compilers to get one to the hardware.
Personally, I'm unaware of any other programming language that makes
bignum-like behavior the default.  Why wouldn't Lisp break the mold
with FP, too, even if it meant some level of software emulation would
be needed?

Again, it's not that I agree that Lisp "got it wrong" when it specified
the current behavior for floating point numbers (and perhaps that is
overstating the OP's view on the matter).  I just don't think that this
particular appeal to speed and ubiquity is a very strong argument.  Of
course, it was this thread (or was it another one about FP?
anyway...) that inspired me to download maxima so I could have a fancy
Lisp pocket calculator <smile>.
From: Christopher C. Stacy
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <uy8dzpqre.fsf@news.dtpq.com>
Damien Kick <······@email.mot.com> writes:
> one has to explicitly declare fixnums to 
> enable capable compilers to get one to the hardware.

You don't declare fixnum or floating literals - they manifestly READ
from their syntax as fixnums (or bignums).  But if you're trying to
say that Lisp makes you declare fixnums in order to bypass the usual
dynamic type safety, that is not something unique to fixnums.  
(And if you are writing an expression of just literal floats or
fixnums, the compiler doesn't need any declarations.)

The chain of logic is that programmers usually really do want floats
when they write numbers with decimal points.  Other languages use that
float syntax. Supporting evidence is that floats are desirable enough
to have been enshrined in the hardware.  Lisp is being compatible
with the syntax of all those other languages.  That you may then
possibly have to declare some variables as floats (as is required
in the other languages all of the time) in order to get the full
optimization is orthogonal to your desire for floats and their syntax.
From: dkixk
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <1107888426.263789.113080@o13g2000cwo.googlegroups.com>
> You don't declare fixnum or floating literals - they manifestly READ
> from their syntax as fixnums (or bignums).  But if you're trying to
> say that Lisp makes you declare fixnums in order to bypass the usual
> dynamic type safety, that is not something unique to fixnums.  [...]

Whether or not declarations can be used to affect the type of an
object created by READing a literal is a red herring.  The point I was
trying to make with bignum/fixnum is that the default behavior of Lisp
for integer arithmetic is not the "usual" behavior; e.g., the fact
that (1+ most-positive-fixnum) gives the correct answer, assuming no
declarations.  Lisp does quite a bit of extra work to ensure the
correctness of this result even given the cost of the performance
penalty for the general case of "+".  If one wants to get the "normal"
behavior, one has to make an extra effort to get speed at the price of
inaccuracy and, AFAIK, the Lisp standard doesn't even guarantee a way
to get the "normal" behavior; e.g., whether or not declarations get one
to the hardware instructions is implementation dependent.
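e.g. (values from one 32-bit implementation; the fixnum limit varies):

> most-positive-fixnum
536870911
> (1+ most-positive-fixnum)
536870912

No declaration, no overflow - the result is silently a bignum.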

> The chain of logic is that programmers usually really do want floats
> when they write numbers with decimal points. [...]

Well, I don't think it can be assumed that all programmers (or even
most?) know enough about FP to really be able to want floats.  Without
some kind of a study/survey, it's basically
conjecture.  Of course, "ignorance of the law is no defense" and I
wouldn't want Lisp to cater to the least common denominator, so...

> [...] Other languages use that float syntax.  Supporting evidence is
> that floats are desirable enough to have been enshrined in the the
> hardware.  [...] lisp is being compatible with the syntax of all
> those other languages.

Yes, Lisp is being compatible for floating point (syntax and
semantics), more or less for integers (syntax but not semantics), and
it is the only language of which I'm aware that has builtin support
for fractions, either syntactically or semantically.  So far Lisp seems
to be 1 for 3 (or should that be 0.3333...? <smile>) on the
compatibility of numerics.

> That you may then possibly have to declare some variables as floats
> (as is required in the other languages all of the the time) in order
> to get the full optimization is orthogonal to your desire for floats
> and their syntax.

Well, again, my own personal desire for floats <pause> I'm just fine
with the way things in Lisp currently are.  I had a numerical analysis
class at university so I've known to be wary of floating point numbers
ever since then.  I do not fault Lisp for doing the standard thing
with numeric literals with decimal points in them, i.e. FP treatment,
nor do I think that I could do a better job.  I am not a specialist in
numerical analysis and I doubt I even know all of the tenable options.
I'm quite happy to have fractions for exact computations.  But won't
you at least concede that Lisp does not simply do the standard thing
with integer arithmetic even though it uses the "standard" syntax for
integer literals (well, excepting the prefix notation)?  Given Lisp's
departures from the "norm" in many other ways, I still don't see why
an argument about that which is normal should lead one to that which
is or should be normative for Lisp.

But, this is really far too meta, I suppose.  Arguing about the
argument...
From: Christopher C. Stacy
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <uekfqpy54.fsf@news.dtpq.com>
"dkixk" <·····@earthlink.net> writes:

> > You don't declare fixnum or floating literals - they manifestly READ
> > from their syntax as fixnums (or bignums).  But if you're trying to
> > say that Lisp makes you declare fixnums in order to bypass the usual
> > dynamic type safety, that is not something unique to fixnums.  [...]
> 
> Whether or not declarations can be used to effect the type of an
> object created by READing a literal is a red herring.  

You cannot ever change the type of an object READ this way,
and that's not what I was saying.

> The point I was trying to make with bignum/fixnum is that the
> default behavior of Lisp for integer arithmetic is not the "usual"
> behavior; e.g., the fact that (1+ most-positive-fixnum) gives the
> correct answer, assuming no declarations.  Lisp does quite a bit of
> extra work to ensure the correctness of this result even given the
> cost of the performance penalty for the general case of "+".

I don't see how this has anything to do with why Lisp is compatible
with other languages in its parsing of floating point literals.

> Well, I don't think that this statement that all programmers
> (or even most?) even know enough about FP to really be able
> to want can be assumed.

It is evidenced by the fact that most programming languages
do not provide any other meaning for such literal inputs.
It is further corroborated by the fact that the other kinds
of decimal number processing have been removed from modern
hardware and replaced with floating point hardware.
From: Per Bothner
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <0hDOd.3716$ZZ.3273@newssvr23.news.prodigy.net>
Christopher C. Stacy wrote:
> "Christophe Turle" <······@nospam.com> writes:
> 
> In the realm of general purpose programming with fractions, 
> there are three choices: floating point, fixed point, and rationals.
> Lisp gives you the two most useful: floating point is for doing fast
> calculations, and rationals are for doing exact computations.  
> Lisp doesn't offer the one that's limited, inexact, and slow.

What I think would be nice would be rationals, but with repeating
decimals, at least as the default input/output format.
E.g. (/ 4.0 3.0) --> 1._3_ where _3_ means a repeating decimal.
That gives you exact computations that are "human-oriented".

I don't know if there are algorithms for repeating-decimal
arithmetic that are better than converting to fractions first.
(It seems like there must be, at least for addition, subtraction,
and multiplication by a non-repeating decimal, and those are
presumably the most common operations.)  If not, it is possible
to use fractions internally and just use repeating decimals for
input/output.
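The output half, at least, is easy; a sketch (assuming positive
integer arguments, using the _3_ marker from above):

;; long division, remembering where each remainder first appeared;
;; a repeated remainder marks the start of the repeating block
(defun repeating-decimal (num den)
  (multiple-value-bind (whole r) (floor num den)
    (let ((digits '()) (seen (make-hash-table)) (start nil) (i 0))
      (loop until (or (zerop r) (setf start (gethash r seen)))
            do (setf (gethash r seen) i)
               (multiple-value-bind (d nr) (floor (* r 10) den)
                 (push d digits)
                 (setf r nr))
               (incf i))
      (let ((ds (nreverse digits)))
        (format nil "~D.~{~D~}~@[_~{~D~}_~]"
                whole
                (if start (subseq ds 0 start) ds)
                (when start (subseq ds start)))))))

> (repeating-decimal 4 3)
"1._3_"
> (repeating-decimal 1 6)
"0.1_6_"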

> I don't know where you are coming up with the "majority" opinion.
> Apparently, though, you have been out-voted: floating point has
> been placed into all the hardware, and support for fixed-point
> (such as string-oriented numerics and BCD, which used to be fairly
> popular in hardware) has been removed.

Now that RISC is dead, and CISC is king, the trend may be reversing.
There seems to be an increasing demand for decimal arithmetic,
including decimal floating-point arithmetic, at least in software,
and possibly in hardware - see: http://www2.hursley.ibm.com/decimal/
Given that cycles are free these days [to a first approximation,
execution time is proportional to cache misses], priorities about
efficient algorithms have changed.
From: Pascal Bourguignon
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87acqfautj.fsf@thalassa.informatimago.com>
"Christophe Turle" <······@nospam.com> writes:
> Scientific programs are not the preponderant part of programs in the world.

Not anymore.

Perhaps instead of fighting floats here, you should begin with CAR and CDR.

Personally, I don't understand this need to erase history, and to
want to prevent oneself from reading legacy data.


As we've already said several times, if you have data that is not
legacy data (your cut-and-pasted rationals written as float), then
DON'T USE the lisp reader.


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
The rule for today:
Touch my tail, I shred your hand.
New rule tomorrow.
From: Christophe Turle
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <4209c40c$0$24788$626a14ce@news.free.fr>
"Pascal Bourguignon" <····@mouse-potato.com> a �crit dans le message de 
news: ··············@thalassa.informatimago.com...
>
> "Christophe Turle" <······@nospam.com> writes:
> Perhaps instead of fighting floats here, you should begin with CAR and 
> CDR.

The CL spec already takes care of this ;)

I'm always using FIRST instead of CAR because it seems more 'natural'. You 
see no compatibility break when they added 'FIRST' to Lisp.

As for CDR, I'm using it instead of REST. Perhaps because REST doesn't seem 
to be more natural than CDR?

>
> Personnaly, I don't understand this need to erase history,

Erase? Did I say I wanted to remove floats? No, I just wish they were not 
the default.

> and to
> be wanting to prevent oneself to read legacy data.

You keep saying that despite my answers on this subject.
So give an example of what you mean: a legacy data string that would be 
prevented from being read under my proposal. And I will show you, using my 
preceding posts, how to handle it.

> As we've already said several times,
> if you have data that is not
> legacy data (your cut-and-pasted rationals written as float), then
> DON'T USE the lisp reader.

I agree with your rule, but in my example I think the premises don't apply; 
you think the contrary.

You think they are rationals written as floats. I think they are rationals 
written in conventional human rational notation. So I'm afraid we can't 
agree on this subject.


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Duane Rettig
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <4hdklpsu9.fsf@franz.com>
"Christophe Turle" <······@nospam.com> writes:

> You think they are rationals written as floats. I think they are rationals 
> written in conventional human rational notation. So I'm afraid we can't 
> agree on this subject.

I agree with your philosophy; Common Lisp (and its ancestors that did
the same) has gestured more toward the granular "hardware"
floating-point usage than toward a more accurate "bigfloat" style to
represent rationals.  By analogy, in a very closely related situation,
Lisp takes back the concept of "integer" and relates it more closely
to mathematics than do C and other "hardware" oriented languages;
we Lispers tend to be proud of that fact.  And we are willing to
sacrifice some small amount of speed for a more accurate
representation of integers than the hardware has to offer.

And yet, when it comes to floating point usage, we tend to lean the
other direction, preferring the less accurate hardware representations,
even for floats that are really rational numbers.  We prefer these
even though a bigfloat package would make such rational-floats much more
accurate.

So why the apparent schizophrenia in an otherwise correctness-driven
culture?  Well, given that hardware floats exist and need to be
supported (more on that later), I think that the biggest reason for
this is surprisingly simple: computers and humans don't think in the
same radix.  Here's my logic:

In your early schoolwork, when you are taught about the difference
between three different kinds of fractions: normal rationals, repeating
rationals, and irrationals, you are also given trivial ways to recognize
them.  Perhaps they are so trivial, that we don't even realize that we
are using the techniques.  I've seen several uses of the distinction
between .1111 and .1111... in this thread, but I don't recall if
anything has been explicitly said about their repeatability.
[Note also that in higher education (sometimes introduced earlier)
a distinction is made between .1111 and 1111/10000 in applied
mathematics, where 1111/10000 is a precise number but .1111 is a
number accurate to 4 decimal places; that is why you might also
see the number .1100, which has the same value as .11 but a
different precision]

Now, a number that can't be represented in a computer will not be as
accurate as one which can; it will instead be an approximation.  So
the candidates for ratio-ization are those which can be accurately
represented - .5 could potentially be read and ratio-ized as 1/2 in
"a Lisp that is not CL".  So what's the problem?  All we need to do
is to read the numbers, and if they are non-repeating, simply make ratios
out of them!  But hang on; what about .1?  It is indeed a non-repeating
rational (or is it?):

CL-USER(1): .1
0.1
CL-USER(2): :i *
A NEW single-float = 0.1 [#x3dcccccd] @ #x10632482
CL-USER(3): 

Why did .1 turn out to be a repeating rational instead of the obviously
clear-cut rational 1/10?  It is because our input is in base 10, and
the representation within the computer is in binary, base 2, and
sometimes a base-10 number that is not repeating will become repeating
in binary.  Therefore, it is no longer accurate.
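One can ask for the stored value as an exact ratio (this is the
single-float case; other float formats give other ratios):

> (rational 0.1)
13421773/134217728
> (rational 0.5)
1/2

1/10 came back as something else entirely; 1/2 came back exactly.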

My theory is that if we had used binary in our normal notations, then
entering .1111 (which would, of course, now be a binary fraction)
might have reasonably been turned into the ratio 15/16 (though at that
point we would have written it 1111/10000 :-).  But the mismatch in
bases causes a mismatch in the printed representation of some fractions,
such that they become repeating fractions when read in using a base that
is different than the base being used to store the data.

Now, many of the hand-calculators of today do indeed read and store in
rational format; they convert the digits read into BCD notation, and
they do their calculations in BCD - essentially, they do not convert
from one base to another, but do all their calculations in base 10.
They pay a price; the calculations are slower than binary calculations,
but this is not bothersome for calculators, since their operation is
done within the limits of human data-entry time.

It should be noted that financial institutions that are customers
sometimes complain to us about the fact that conversions to binary
and then back to decimal introduce roundoff errors in financial
calculations, and they ask us for BCD support from the language.
It would be a nice thing to provide...

And finally, the question of why we (Lisp) must support floating-point
hardware is answered in one word:  speed.   Not the kind of speed
gain that you might get from eliminating the fixnum-to-bignum
overflow test, but the thousandsfold increases that occur when ffts
and simple logarithms are done; the speed-vs-accuracy tradeoff is
much easier to judge when your lisp program is still crunching some
signal-processing calculations when the C code has been done hours
ago.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Cameron MacKinnon
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <PYGdnQfKKfRy95ffRVn-gQ@golden.net>
Duane Rettig wrote:
> So why the apparent schizophrenia in an otherwise correctness-driven 
> culture?  Well, given that hardware floats exist and need to be 
> supported (more on that later), I think that the biggest reason for 
> this is surprisingly simple: computers and humans don't think in the
>  same radix.  Here's my logic:
> 
> In your early schoolwork, when you are taught about the difference 
> between three different kinds of fractions: normal rationals, 
> repeating rationals, and irrationals, you are also given trivial ways
>  to recognize them.  Perhaps they are so trivial, that we don't even 
> realize that we are using the techniques.  I've seen several uses of 
> the distinction between .1111 and .1111... in this thread, but I 
> don't recall if anything has been explicitly said about their 
> repeatability. [Note also that in higher education (sometimes 
> introduced earlier) a distinction is made between .1111 and 
> 1111/10000 in applied mathematics, where 1111/10000 is a precise 
> number but .1111 is a number accurate to 4 decimal places; that is 
> why you might also see the number .1100, which has the same value as 
> .11 but a different precision]
> 
> Now, a number that can't be represented in a computer will not be as
>  accurate as one which can; it will instead be an approximation.

I submit that any number can be represented in a computer, in either n
bits per digit or a small amount of code. Further, it is trivial to say
that the numbers we've been talking about in this thread are represented
exactly in the computer -- in the input stream. It is only when they get
converted to our favourite FP format that they can no longer be exactly
represented. Sorry to be pedantic, but by using phrases like "a number
that can't be represented in a computer" you're suggesting to the
suggestible that this is an inherent property of computation with binary
machines, rather than one of our chosen format.

> So the candidates for ratio-ization are those which can be accurately
>  represented - .5 could potentially be read and ratio-ized as 1/2 in
>  "a Lisp that is not CL".  So what's the problem?  All we need to do
>  is to read the numbers, and if they are non-repeating, simply make 
> ratios out of them!  But hang on; what about .1?  It is indeed a 
> non-repeating rational (or is it?):
> 
> CL-USER(1): .1
> 0.1
> CL-USER(2): :i *
> A NEW single-float = 0.1 [#x3dcccccd] @ #x10632482
> CL-USER(3):
> 
> Why did .1 turn out to be a repeating rational instead of the 
> obviously clear-cut rational 1/10?  It is because our input is in 
> base 10, and the representation within the computer is in binary, 
> base 2, and sometimes a base-10 number that is not repeating will 
> become repeating in binary.  Therefore, it is no longer accurate.

Here I must object to your use of the word "sometimes" as it drastically
understates the extent of the problem. Approximately EVERY base-10 float
that a user could write down repeats in base-2. Wade Humeniuk and others
did the math on this last December in a thread entitled "Floating-point
arithmetic in CL."


> Now, many of the hand-calculators of today do indeed read and store 
> in rational format; they convert the digits read into BCD notation, 
> and they do their calculations in BCD - essentially, they do not 
> convert from one base to another, but do all their calculations in 
> base 10. They pay a price; the calculations are slower than binary 
> calculations, but this is not bothersome for calculators, since their
>  operation is done within the limits of human data-entry time.

What about DBMSs? Don't they pay a price in order to accurately
calculate in BCD?

> It should be noted that financial institutions that are customers 
> sometimes complain to us about the fact that conversions to binary 
> and then back to decimal introduce roundoff errors in financial 
> calculations, and they ask us for BCD support from the language. It 
> would be a nice thing to provide...

Whenever some poor programmer makes the mistake of using FP for currency
calculations, there's a Greek chorus of grizzled hackers singing that he
should have known better. But people keep doing it -- it's natural
behaviour. How many million transactions per day do you suppose are
handled with floating point software that gets the pennies wrong when
the numbers exceed a certain size? It's all buggy software, and a
misapplication of FP, but it is a common trap. Every day, somewhere,
more code with this bug goes into production.

> And finally, the question of why we (Lisp) must support 
> floating-point hardware is answered in one word:  speed.   Not the 
> kind of speed gain that you might get from eliminating the 
> fixnum-to-bignum overflow test, but the thousandsfold increases that 
> occur when ffts and simple logarithms are done; the speed-vs-accuracy
>  tradeoff is much easier to judge when your lisp program is still 
> crunching some signal-processing calculations when the C code has 
> been done hours ago.

I don't think anyone is suggesting that Lisp stop supporting binary
floating point hardware. In fact, I'd argue that the standard should be
updated to BETTER support modern FP hardware - at least to expose to the
programmer all of the IEEE behaviour that the CPU does.

FFT? Please. That's another large group of FP programmers who have left
the scene. Today, if you need really fast FFT you buy a $10 chip from
Texas Instruments. If you need fast FFTs on general purpose hardware you
FFI to the Fastest Fourier Transform in the West. [Aside: FFTW
was a real eye opener for me when I was a C drone. What the heck was
Objective Caml, and who used it for high performance numerics? Could it
be that there were smarter languages for smarter programmers?]

Most programmers and most programs need reliable decimal numeric
manipulation more than they need fast approximate binary answers. The
ones who DO need approximate answers usually need more error interval
analysis than they've bothered doing (i.e. their programs may or may not
be buggy). The percentage of floating point code out there which isn't
broken now but might break if the default input format were to change
(i.e. was coded competently for binary FP) is very, very small.

Binary FP is really good at what it is good at. But since most
programmers don't understand it, most extant code that uses it is buggy
(or works by accident rather than design). Many of the people who have
needed speed the most have moved to special purpose hardware (3D
Graphics Processing Units, floating point DSPs). So why should it
continue to be the default format for programming general purpose CPUs?
We know from experience that as long as it is, ignorance about its
stunningly unintuitive properties will continue to be the cause of a
good fraction of the most insidious bugs known to programmers.
From: jayessay
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <m3vf91ibf9.fsf@rigel.goldenthreadtech.com>
Cameron MacKinnon <··········@clearspot.net> writes:

> I submit that any number can be represented in a computer, in either
> n bits per digit or a small amount of code.

And, in general, you'd be dead wrong.


> What about DBMSs? Don't they pay a price in order to accurately
> calculate in BCD?

Irrelevant.


> Whenever some poor programmer makes the mistake of using FP for currency
> calculations, there's a Greek chorus of grizzled hackers singing that he
> should have known better. But people keep doing it -- it's natural
> behaviour.

Indeed, stupidity is definitely natural behavior.  This is even more
apropos in the context of most programmers.


> How many million transactions per day do you suppose are handled
> with floating point software that gets the pennies wrong when the
> numbers exceed a certain size? It's all buggy software, and a
> misapplication of FP, but it is a common trap. Every day, somewhere,
> more code with this bug goes into production.

Shrug.  If true, it would appear that nobody actually cares; or, for all
anyone else knows, it may be done that way on purpose.


/Jon

-- 
'j' - a n t h o n y at romeo/charley/november com
From: William Bland
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <pan.2005.02.09.20.15.39.997195@abstractnonsense.com>
On Wed, 09 Feb 2005 14:56:30 -0500, Cameron MacKinnon wrote:
> 
> I submit that any number can be represented in a computer

I think I hear Turing spinning in his grave ;-)

Google:  "computable number"

Cheers,
	Bill.
From: Christopher C. Stacy
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <uwttha3po.fsf@news.dtpq.com>
Cameron MacKinnon <··········@clearspot.net> writes:

> How many million transactions per day do you suppose are handled
> with floating point software that gets the pennies wrong when the
> numbers exceed a certain size? It's all buggy software, and a
> misapplication of FP, but it is a common trap. Every day, somewhere,
> more code with this bug goes into production.

I don't believe this.  I think they know better.

I say this just because all such code that I've seen didn't 
have this bug, and every application programmer I've talked 
to in the last 30 years knew better.  Of course that's not
a scientific sample.  But if this is some rampant problem,
as you suggest, why aren't there millions of people complaining?
Where are the news reports, lawsuits, and government investigations
into this problem?
From: Rahul Jain
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87mzuci9k3.fsf@nyct.net>
······@news.dtpq.com (Christopher C. Stacy) writes:

> Cameron MacKinnon <··········@clearspot.net> writes:
>
>> How many million transactions per day do you suppose are handled
>> with floating point software that gets the pennies wrong when the
>> numbers exceed a certain size?
> 
> I don't believe this.  I think they know better.

You're mostly right. The senior developers there usually know better. 
But sometimes the good senior developers get there after the software is
mostly complete, so they don't care much.

> I say this just because all such code that I've seen didn't 
> have this bug, and every application programmer I've talked 
> to in the last 30 years knew better.  Of course that's not
> a scientific sample.

The more recent ones were probably all Lisp programmers, so they were
good programmers, so they understood what FP numbers are. Although that
correlation seems to have weakened in the past few months. The less
recent ones probably knew about FP just because it was so vastly
important back then, irrespective of how good a programmer they are.

> But if this is some rampant problem,
> as you suggest, why aren't there millions of people complaining?
> Where are the news reports, lawsuits, and government investigations
> into this problem?

Who cares about a penny here or there? My client's clients round off
prices to the nearest 6 decimals, IIRC, before allocating the money into
accounts. They trade in increments of 10000 of these prices, but they
often allocate to accounts in smaller portions. However, this is needed
for legal reasons (giving all accounts an equal price). The brokers in
the system are more than happy to fudge the actual amount involved in
the trade. After all, they're charging commissions that far exceed those
pennies.

I've only seen one case where it's an issue, and that's because they
want to evenly distribute 2 cents among 3 people. They're not even
willing to fork over the dollar a year that it would take to distribute
an extra penny each time this happens. However, they report their
financials rounded off to the nearest thousand dollars. Go figure.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Christopher C. Stacy
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <u4qgkyxe5.fsf@news.dtpq.com>
Rahul Jain <·····@nyct.net> writes:

> ······@news.dtpq.com (Christopher C. Stacy) writes:
> 
> > Cameron MacKinnon <··········@clearspot.net> writes:
> >
> >> How many million transactions per day do you suppose are handled
> >> with floating point software that gets the pennies wrong when the
> >> numbers exceed a certain size?
> > 
> > I don't believe this.  I think they know better.
> 
> You're mostly right. The senior developers there usually know better. 
> But sometimes the good senior developers get there after the software is
> mostly complete, so they don't care much.
> 
> > I say this just because all such code that I've seen didn't 
> > have this bug, and every application programmer I've talked 
> > to in the last 30 years knew better.  Of course that's not
> > a scientific sample.
> 
> The more recent ones were probably all Lisp programmers, 

Actually, I wasn't thinking of Lisp programmers at all.
(Like you, I take it for granted that Lisp programmers know better. 
On the other hand, I've occasionally found that's not always true,
since most of them never had to deal with domains so mundane as
monetary applications.)

I was thinking of BASIC and C and COBOL and JAVA and APL programmers 
who I have worked with or known over the years, whose programs are all
about accounts and inventories.  Those people know how to frob dollars
and cents without screwing it up.

> Who cares about a penny here or there? My client's clients round off
> prices to the nearest 6 decimals, IIRC, before allocating the money into
> accounts. They trade in increments of 10000 of these prices, but they
> often allocate to accounts in smaller portions.

The last company I worked at doing financials was a long distance
telephone company, which involved telephone calls, many kinds of
direct and third party billing, taxes, commission accounts, 
bad debt and profit allocations, collections, and so on.
I can assure you that pennies were everything!
From: Rahul Jain
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87ekfof7c6.fsf@nyct.net>
······@news.dtpq.com (Christopher C. Stacy) writes:

> Actually, I wasn't thinking of Lisp programmers at all.
> (Like you, I take it for granted that Lisp programmers know better. 
> On the other hand, I've occasionally found that's not always true,
> since most of them never had to deal with domains so mundane as
> monetary applications.)
>
> I was thinking of BASIC and C and COBOL and JAVA and APL programmers 
> who I have worked with or known over the years, whose programs are all
> about accounts and inventories.  Those people know how to frob dollars
> and cents without screwing it up.

These are the types of people I work with, the ones building the markets
where these dollars and cents get from where they are to where they're
wanted. I've had to teach them repeatedly about how FP works and about
why Java's Double.toString() is the wrong way to display a value that
comes in as 3 1/8, then gets saved as 0.03125 and then read back and
multiplied by 100. You want to round to the nearest 16th, THEN display
that value.
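e.g., a one-expression sketch of that rounding step:

;; snap to the nearest 1/16 before formatting
(defun nearest-16th (x)
  (/ (round (* x 16)) 16))

> (nearest-16th (* 0.03125d0 100))
25/8

which is 3 1/8 again, instead of whatever digits the double happens to
print with.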

The people involved in actually holding and transferring that money care
even less about pennies because the bid/ask spreads are often wide
enough that the variability in the market from one moment to the next
(and we're talking seconds in some markets, fractions of a second in a
few) is
equivalent to many dollars of difference.

Maybe this is because the transactions typically involve at least $1000
at a time, usually $1 to 10 million.

>> Who cares about a penny here or there? My client's clients round off
>> prices to the nearest 6 decimals, IIRC, before allocating the money into
>> accounts. They trade in increments of 10000 of these prices, but they
>> often allocate to accounts in smaller portions.
>
> The last company I worked at doing financials was a long distance
> telephone company, which involved telephone calls, many kinds of
> direct and third party billing, taxes, commission accounts, 
> bad debt and profit allocations, collections, and so on.
> I can assure you that pennies were everything!

Fair enough, but FP is fine for that stuff because the precision needed
isn't 50 digits, even if the range of all numbers involved goes from
10,000,000 to 0.0001. You typically only deal with numbers of the same
size when adding. As long as you're careful when adding the values
together (divide and conquer), you'll be fine. In the end, you'll be
rounding to the nearest 1000 anyway. That's all the shareholders care
about.
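The divide-and-conquer part might look like this (pairwise summation;
a sketch - rounding error grows with the recursion depth, roughly
log n, rather than with n):

;; sum a vector by halves so each addition combines operands of
;; comparable magnitude
(defun pairwise-sum (v &optional (start 0) (end (length v)))
  (let ((n (- end start)))
    (case n
      (0 0)
      (1 (aref v start))
      (t (let ((mid (+ start (floor n 2))))
           (+ (pairwise-sum v start mid)
              (pairwise-sum v mid end)))))))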

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Greg Menke
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <m3mzubxorj.fsf@athena.pienet>
Rahul Jain <·····@nyct.net> writes:


> ······@news.dtpq.com (Christopher C. Stacy) writes:
> 
> 
> >> Who cares about a penny here or there? My client's clients round off
> >> prices to the nearest 6 decimals, IIRC, before allocating the money into
> >> accounts. They trade in increments of 10000 of these prices, but they
> >> often allocate to accounts in smaller portions.
> >
> > The last company I worked at doing financials was a long distance
> > telephone company, which involved telephone calls, many kinds of
> > direct and third party billing, taxes, commission accounts, 
> > bad debt and profit allocations, collections, and so on.
> > I can assure you that pennies were everything!
> 
> Fair enough, but FP is fine for that stuff because the precision needed
> isn't 50 digits, even if the range of all numbers involved goes from
> 10,000,000 to 0.0001. You typically only deal with numbers of the same
> size when adding. As long as you're careful when adding the values
> together (divide and conquer), you'll be fine. In the end, you'll be
> rounding to the nearest 1000 anyway. That's all the shareholders care
> about.
> 

I did a financial system that accounted for treasury bills and notes,
carrying balances of up to about $1 billion, with monthly turnover in
the hundreds of millions.  The various interest rates we used were
usually 8 or 9 decimal places long and everything had to add up to the
penny at all times with cross-checking of various numbers various ways.
The system was subjected to rigorous yearly audits; print the report
tables (3 or 4" stack of paper) and the external auditors come in with
their calculators to check things out.  At that point your stuff had
better be right or things start getting tense around the office.

Because it was the only game in town when we wrote it, we used floating
point for the arithmetic, and as one would expect, once the system
started getting big and complete enough, the penny problems started
showing up.  The arithmetic was sufficiently involved that we were
essentially guaranteed to run into FP problems no matter how we recast
the math.

It was such a miserable problem that we had to abandon FP altogether and
write a set of fixed point math functions- add, subtract, multiply,
divide, and a couple of functions to import and export values- to do the
math with rounding at the correct decimal place, yet carrying 20 to 30-some
total decimal places so intermediate terms in the arithmetic
wouldn't lose precision.
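The core of such a library is small; a sketch of the idea (an
illustration only, not the system's actual code), with amounts held
as integers scaled by 10^9:

(defconstant +scale+ (expt 10 9))   ; carry 9 decimal places

;; addition and subtraction need no rescaling and are always exact
(defun fx+ (a b) (+ a b))
(defun fx- (a b) (- a b))

;; multiplication and division pick up an extra scale factor;
;; divide it back out, rounding once, explicitly, at the last
;; carried place (CL's ROUND rounds ties to even)
(defun fx* (a b) (round (* a b) +scale+))
(defun fx/ (a b) (round (* a +scale+) b))

(defun fx-import (rat) (round (* rat +scale+)))   ; rational => fixed
(defun fx-export (a)   (/ a +scale+))             ; fixed => rational

> (fx-export (fx* (fx-import 1/8) (fx-import 3/100)))
3/800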

As you would expect, it had a huge performance impact, but it made the
system usable.  Accountants don't mind if their interest calculation
takes a little longer- but they will drive you insane working out where
those beastly pennies are going.

So nowadays, with the proliferation of math libraries and better
languages, there is no good reason to use FP for financial arithmetic-
and doing so is asking for trouble.  It may work in some cases, but that
FP rounding problem is there waiting for you- and you won't have any
warning when it happens.  It probably already is happening and it's not
been noticed because of the lack of auditing- or perhaps you've been
lucky and the rounding up errors are more or less cancelled by the
rounding down ones.

If you think the shareholders don't care, I hope you're there to answer
the questions when accounts go from 51000 to 50999 shares because of an
FP error in your code.  Bring a sleeping bag to the office, and you'll
need to find someplace to get your 3 hours of sleep a night while you
fix the system.

Gregm
From: Kent M Pitman
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <uis4zf08n.fsf@nhplace.com>
Greg Menke <··········@toadmail.com> writes:

> As you would expect, it had a huge performance impact, but it made the
> system usable.  Accountants don't mind if their interest calculation
> takes a little longer- but they will drive you insane working out where
> those beastly pennies are going.

Out of curiosity, did the accountants happen to compute whether they were
spending more (in programmer development and QA time, in delay time,
in electricity and air conditioning, etc.) than a penny to make their
computation penny-accurate?

McDonald's patrons don't care if it costs more to the environment to
have disposable containers wrapped around every food item for the 2
minutes between the time the food comes out of the microwave and it
goes into their mouth.  But I have to wonder if they'd reach the same
conclusion if they were taxed by local towns for the amount of
landfill all that trash was producing...  It might bring the true
cost to a high enough value that they'd prefer what had at first
seemed a messier solution... like "returnable/washable containers" or
"bring your own container" or ...

In complex analyses of costs in a system, there's often something that
looks like an 'electrical ground' symbol somewhere on the chart.  And
when you ask it is described as 'infinite source or sink of electrons'
(or money or landfill or whatever).  But sometimes I wonder what would
happen if you weren't allowed to do that kind of blind-eye analysis
that ignores that these grounds are not infinite.  What if they had to
really account for the whole system?  The usual answer, ironically, is
that it would be too expensive to do such a calculation.

Then I wonder sometimes how one can know at the outset that it's too
expensive to do a calculation about waste, and at the same time know
that it's not too expensive to do a huge project to enable
calculations that avoid tiny amounts of waste.  I guess people like to
decide first and compute later, only to confirm what they think they
already know.

I was told when I was at MIT back in the late 1970's that Boston's
charge of $0.25 to ride the subway was not enough to pay for the
people who collected those quarters, and that the taxpayers would save
money if they didn't charge the fee.  But that taxpayers like to see
money coming from people who use government services.

People (like numbers, I guess) are not always rational.
From: Gorbag
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <9U6Pd.4$IK.1@bos-service2.ext.ray.com>
"Kent M Pitman" <······@nhplace.com> wrote in message
··················@nhplace.com...
> I was told when I was at MIT back in the late 1970's that Boston's
> charge of $0.25 to ride the subway was not enough to pay for the
> people who collected those quarters, and that the taxpayers would save
> money if they didn't charge the fee.  But that taxpayers like to see
> money coming from people who use government services.
>
> People (like numbers, I guess) are not always rational.

Actually, being from Boston, you should know the (Boston) Common's problem.
That the .25 didn't cover the cost was not as important as making sure the
cost was non-zero and non-trivial (hence utilization didn't attempt to
approximate infinity).
From: Cameron MacKinnon
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <K4idnShKZ6RknJDfRVn-ig@golden.net>
Kent M Pitman wrote:
> Greg Menke <··········@toadmail.com> writes:
> 
> 
>> As you would expect, it had a huge performance impact, but it made
>> the system usable.  Accountants don't mind if their interest
>> calculation takes a little longer- but they will drive you insane
>> working out where those beastly pennies are going.
> 
> 
> Out of curiosity, did the accountants happen to compute whether they were 
> spending more (in programmer development and QA time, in delay time, 
> in electricity and air conditioning, etc.) than a penny to make their 
> computation penny-accurate?

Peace of mind is worth more than a penny. One of the foundations of
western civilization is double entry accounting, whose primary tenet is
"thou shalt balance." When books don't balance, it's usually a good bet
that someone's getting screwed, but whom and by how much are more
difficult to judge. Accountants demand exactitude, and rightly so.

Further, in Greg's particular example (and in securities trade
processing in general) the pennies really do add up quickly due to the
huge volumes involved. And if you owe or are owed interest by a
counterparty, it behooves you to know exactly how much, so that when
their figures disagree you can confidently tell them they're wrong.
(Yes, I've coded financial systems. No, working for accountants isn't fun.)


> McDonald's patrons don't care if it costs more to the environment to 
> have disposable containers wrapped around every food item for the 2 
> minutes between the time the food comes out of the microwave and it 
> goes into their mouth.  But I have to wonder if they'd reach the same
>  conclusion if they were taxed by local towns for the amount of 
> landfill all that trash was producing...

Much as I agree with the sentiment, the analogy is flawed. Accountants
who insist on their financial systems being correct to the penny pay the
full cost of those systems -- there's no externalities whose costs
aren't being factored in.

Trying to convince an accountant that he doesn't need exact answers
because the computer has "close enough" in hardware won't fly, as Greg
has said. And why should it? Would you try to convince a biologist of
the worthiness of a computation that mistakenly frobbed one out of every
n base pairs? Or a publisher of a typesetting system that dropped the
occasional 'e'?

--
"Help me! Help meeee!" -- Vincent Price
From: Kent M Pitman
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <uis4y3lfz.fsf@nhplace.com>
Cameron MacKinnon <··········@clearspot.net> writes:

> Much as I agree with the sentiment, the analogy is flawed. Accountants
> who insist on their financial systems being correct to the penny pay the
> full cost of those systems -- there's no externalities whose costs
> aren't being factored in.

Nonsense, they bill this right back to the client and it reflects itself
as my bank hiking its monthly fee.

Banks used to pay interest for holding my money.  Now they make me pay.
That's progress for you.  And I'm QUITE sure it's because the bank isn't
the slightest concerned about the cost of correctness; it just passes it
right on to the one person in the equation who has no control over the
process.

Now if bank customers could vote on what services they wanted and what
they should pay, THAT would be progress.

> Trying to convince an accountant that he doesn't need exact answers
> because the computer has "close enough" in hardware won't fly, as Greg
> has said. And why should it? Would you try to convince a biologist of
> the worthiness of a computation that mistakenly frobbed one out of every
> n base pairs? Or a publisher of a typesetting system that dropped the
> occasional 'e'?

No, but I would tell a biologist he couldn't have funding to find things
that his funding agency just doesn't care about, and I would tell a 
publisher not always to obsess over whether a point is 1/72 or 1/72.25 
inches if doing so would cost me more money getting my book published.
From: Cameron MacKinnon
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <MLudnT0piItbhpDfRVn-1g@golden.net>
Kent M Pitman wrote:
> Banks used to pay interest for holding my money.  Now they make me 
> pay. That's progress for you.  And I'm QUITE sure it's because the 
> bank isn't the slightest concerned about the cost of correctness; it 
> just passes it right on to the one person in the equation who has no 
> control over the process.

Well, it isn't just banks that have accounting systems, all businesses
do. And if banks could reduce their costs tomorrow, they sure wouldn't
pass them on to you.

I sense some hostility towards banks. I'll let you in on a little
secret: If you list the businesses which regularly nickel and dime you
and everybody else (banks, cable, local phone...) you'll find that some
of them are very profitable. Put your money into their stock instead of
the bank. Be careful, though. Some of these companies think their
profits come from talented management rather than monopolies, so they
diversify into other more competitive areas of the economy and get fleeced.

But really, given your government's current fiscal and monetary
policies, I'd be more worried about inflation than service charges if I
were you. Move it offshore.
From: Greg Menke
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <m3fz02e0cr.fsf@athena.pienet>
Kent M Pitman <······@nhplace.com> writes:

> Cameron MacKinnon <··········@clearspot.net> writes:
> 
> > Much as I agree with the sentiment, the analogy is flawed. Accountants
> > who insist on their financial systems being correct to the penny pay the
> > full cost of those systems -- there's no externalities whose costs
> > aren't being factored in.
> 
> Nonsense, they bill this right back to the client and it reflects itself
> as my bank hiking its monthly fee.
> 
> Banks used to pay interest for holding my money.  Now they make me pay.
> That's progress for you.  And I'm QUITE sure it's because the bank isn't
> the slightest concerned about the cost of correctness; it just passes it
> right on to the one person in the equation who has no control over the
> process.

It's absolutely true that you're helping to finance the fanatical pursuit
of rounding errors.  However there isn't much of an alternative.  How
much are you willing to pay for correct bank statements?  Or even for an
estimate for how much they're wrong?  Or, do you want them to be
correct, so your check ledger adds up to the same total as the bank's?
Would you be happy if your balance shrank by .01% every 2 weeks just
because of a math error that the bank feels it doesn't have to fix?

Would you be happy if the IRS didn't chase rounding errors too?  Taxes
are bad enough, but what about the situation where rounding charges you
extra one year and too little the next, with no justifiable
reason for the variation- just inscrutable rounding errors happening in
equally inscrutable systems?

Believe me, the systems people are very well aware of how much time and
money is sunk into chasing pennies- there is continual pressure to
reduce both, because it bloats the development budgets.  And when
systemic math errors are found later rather than sooner, handling them,
much less fixing them, is always more expensive.  Long term math errors
spread around financial databases make a mess of the arithmetic and lead
to all kinds of crazy patches.


> Now if bank customers could vote on what services they wanted and what
> they should pay, THAT would be progress.

Banking progress for me is crediting my deposits when they actually
clear, not a week later- which allows the bank to float my money and
earn interest off it before they actually give it to me.

 
> > Trying to convince an accountant that he doesn't need exact answers
> > because the computer has "close enough" in hardware won't fly, as Greg
> > has said. And why should it? Would you try to convince a biologist of
> > the worthiness of a computation that mistakenly frobbed one out of every
> > n base pairs? Or a publisher of a typesetting system that dropped the
> > occasional 'e'?
> 
> No, but I would tell a biologist he couldn't have funding to find things
> that his funding agency just doesn't care about, and I would tell a 
> publisher not always to obsess over whether a point is 1/72 or 1/72.25 
> inches if doing so would cost me more money getting my book published.

You might not care.  But the engineer designing the car or train you
ride in had better- and the machinist that turns the driveshafts had
better care too.  Would you be happy entrusting your life to machines
and systems where the people who built them didn't fuss out the
precision they need, both to make the system work and to leave a
well-defined safety margin?

The degree of precision and accuracy required is not the same in all
applications.  Sometimes you don't have to care much beyond a couple of
decimal places, but sometimes, if you don't care out to 10 or 12, then
people are going to get killed.

Gregm
From: Thomas A. Russ
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <ymism3z0z5b.fsf@sevak.isi.edu>
Greg Menke <··········@toadmail.com> writes:

> Would you be happy if the IRS didn't chase rounding errors too?  Taxes
> are bad enough, but what about the situation where rounding costs you
> extra one year and too little the next and there is no justifiable
> reason for the variation- just inscrutable rounding errors happening in
> equally inscrutable systems.

Well, actually, the IRS encourages rounding to whole dollars, so you get
to work with integer math anyway :)

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Rahul Jain
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87ekfabme8.fsf@nyct.net>
Greg Menke <··········@toadmail.com> writes:

> Banking progress for me is crediting my deposits when they actually
> clear, not a week later- which allows the bank to float my money and
> earn interest off it before they actually give it to me.

That's no problem. Just get a job writing software for Repo traders and
you'll get more than your share of that money. :)

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Greg Menke
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <m3ekfanl8y.fsf@athena.pienet>
Rahul Jain <·····@nyct.net> writes:
> Greg Menke <··········@toadmail.com> writes:
> 
> > Banking progress for me is crediting my deposits when they actually
> > clear, not a week later- which allows the bank to float my money and
> > earn interest off it before they actually give it to me.
> 
> That's no problem. Just get a job writing software for Repo traders and
> you'll get more than your share of that money. :)

Or be a big enough customer so the bank actually cares about you.

Gregm
From: Rahul Jain
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87is4mbml4.fsf@nyct.net>
Cameron MacKinnon <··········@clearspot.net> writes:

> Further, in Greg's particular example (and in securities trade
> processing in general) the pennies really do add up quickly due to the
> huge volumes involved. And if you owe or are owed interest by a
> counterparty, it behooves you to know exactly how much, so that when
> their figures disagree you can confidently tell them they're wrong.
> (Yes, I've coded financial systems. No, working for accountants isn't fun.)

They always disagree anyway. Do you accrue interest on a market holiday
or not? On a weekend? Those questions make even more difference than
fractions of a penny in rounding errors. Also, if the errors add up,
you're using a retarded FPU or you've just hit a shocking series of
coincidences.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Greg Menke
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <m3k6p2nlb0.fsf@athena.pienet>
Rahul Jain <·····@nyct.net> writes:

> Cameron MacKinnon <··········@clearspot.net> writes:
> 
> > Further, in Greg's particular example (and in securities trade
> > processing in general) the pennies really do add up quickly due to the
> > huge volumes involved. And if you owe or are owed interest by a
> > counterparty, it behooves you to know exactly how much, so that when
> > their figures disagree you can confidently tell them they're wrong.
> > (Yes, I've coded financial systems. No, working for accountants isn't fun.)
> 
> They always disagree anyway. Do you accrue interest on a market holiday
> or not? On a weekend? Those questions make even more difference than
> fractions of a penny in rounding errors. Also, if the errors add up,
> you're using a retarded FPU or you've just hit a shocking series of
> coincidences.
> 

There are disagreements, but there are well-defined rules for how
interest is accrued.  I imagine the trick is making sure both parties
agree on which set of rules applies.  FP rounding errors have a way of
compounding as an algorithm works its way through to its conclusion.
These errors aren't FPU faults; the hardware is working as designed- the
problem is using floating point in an inappropriate situation, where its
design tradeoffs affect the correctness of the results.

Gregm
From: Greg Menke
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <m3r7jmeqwb.fsf@athena.pienet>
Kent M Pitman <······@nhplace.com> writes:


> Greg Menke <··········@toadmail.com> writes:
> 
> > As you would expect, it had a huge performance impact, but it made the
> > system usable.  Accountants don't mind if their interest calculation
> > takes a little longer- but they will drive you insane working out where
> > those beastly pennies are going.
> 
> Of curiosity, did the accounts happen to compute whether they were
> spending more (in programmer development and QA time, in delay time,
> in electricity and air conditioning, etc) than a penny to make their
> computation penny accurate?

Oh, it cost vastly more to fix the pennies than their total worth,
probably 4 or 5 orders of magnitude more.  The standard joke goes like
"Can't I just give you $50 to cover the rounding errors we expect to
have for the life of the system?"  It's a standard joke because the
accountants want their pennies to add up, and that's the end of the story.
They're very happy to make them add up more inexpensively, but job #1 is
to make them add up.  Sort of like job #1 for a doctor is to not kill
his/her patients.

 
> Then I wonder sometimes how one can know at the outset that it's too
> expensive to do a calculation about waste, and at the same time know
> that it's not too expensive to do a huge project to enable
> calculations that avoid tiny amounts of waste.  I guess people like to
> decide first and compute later, only to confirm what they think they
> already know.

When you're talking about accounting, it's all about exactness, or at
least exact inexactness- meaning the errors are located in a
well-defined box so they can be managed.  Accountants will catalog waste
just as happily as they do money, if the system they're working within
makes it important to do so.  Cost-effectiveness is a different and
important question- but not one to be answered within the domain of
accounting.  The people hiring the accountants should address that
question and establish what needs to be accounted for and what does
not.  In my case, federal law established what the system had to
account for and how, so that was that.

All in all, the system we implemented ended up a lot cheaper and more
flexible than the dreadful old mainframe system it replaced- so that's
one measure of cost-effectiveness.

> 
> People (like numbers, I guess) are not always rational.

Very true.

Gregm
From: Rahul Jain
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87acpybma2.fsf@nyct.net>
Greg Menke <··········@toadmail.com> writes:

> Oh it cost vastly more to fix the pennies than their total worth,
> probably 4 or 5 orders of magnitude more.  The standard joke goes like
> "Can't I just give you $50 to cover the rounding errors we expect to
> have for the life of the system?"  Its a standard joke because the
> accountants want their pennies to add up and thats the end of the story.
> They're very happy to make them add up more inexpensively, but job #1 is
> to make them add up.  Sort of like job #1 for a doctor is to not kill
> his/her patients.

However, the value of life in pretty much all parts of the world is much
larger than pennies here and there.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Greg Menke
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <m38y5inkss.fsf@athena.pienet>
Rahul Jain <·····@nyct.net> writes:

> Greg Menke <··········@toadmail.com> writes:
> 
> > Oh it cost vastly more to fix the pennies than their total worth,
> > probably 4 or 5 orders of magnitude more.  The standard joke goes like
> > "Can't I just give you $50 to cover the rounding errors we expect to
> > have for the life of the system?"  Its a standard joke because the
> > accountants want their pennies to add up and thats the end of the story.
> > They're very happy to make them add up more inexpensively, but job #1 is
> > to make them add up.  Sort of like job #1 for a doctor is to not kill
> > his/her patients.
> 
> However, the value of life in pretty much all parts of the world is much
> larger than pennies here and there.
> 

True, but we're not talking about the value of life, we're talking about
accounting.  And from that perspective, human life has hardly any
value- generally people are accounted as costs whereas capital equipment
and facilities are resources.  This is accounting, not HR or public
relations.

Gregm
From: Damien Kick
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <sm42ofom.fsf@email.mot.com>
Kent M Pitman <······@nhplace.com> writes:

> Greg Menke <··········@toadmail.com> writes:
> 
> > As you would expect, it had a huge performance impact, but it made the
> > system usable.  Accountants don't mind if their interest calculation
> > takes a little longer- but they will drive you insane working out where
> > those beastly pennies are going.
> 
> Of curiosity, did the accounts happen to compute whether they were
> spending more (in programmer development and QA time, in delay time,
> in electricity and air conditioning, etc) than a penny to make their
> computation penny accurate?

And would it make a difference to their results whether or not the
computations they were doing to determine the cost of penny accurate
computations were being done at penny accuracy <smile>?
From: Rahul Jain
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87mztybmuc.fsf@nyct.net>
Greg Menke <··········@toadmail.com> writes:

> I did a financial system that accounted for treasury bills and notes,
> carrying balances of up to about $1 billion, with monthly turnover in
> the hundreds of millions.  The various interest rates we used were
> usually 8 or 9 decimal places long

Curious. The people who buy and sell these securities on a daily basis
laugh hysterically when they see a yield quoted to the 4th decimal
place.

> [...]

Is this for a pension plan or for an investment plan? Sounds like a
variable annuity, which is such a scam anyway that they deserve to be
beaten up by everyone, no matter whether the pennies add up or not.

> If you think the shareholders don't care, I hope you're there to answer
> the questions when accounts go from 51000 to 50999 shares because of an
> FP error in your code.  Bring a sleeping bag to the office, and you'll
> need to find someplace to get your 3 hours of sleep a night while you
> fix the system.

I'm not sure why one would use FP to count shares... In any case,
fractional shares are only counted to 3 decimal places in the cases I've
seen. Still, who cares about pennies when you're dealing with securities
with volatilities over 1% per month?

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Greg Menke
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <m3psyunlh1.fsf@athena.pienet>
Rahul Jain <·····@nyct.net> writes:

> Greg Menke <··········@toadmail.com> writes:
> 
> > I did a financial system that accounted for treasury bills and notes,
> > carrying balances of up to about $1 billion, with monthly turnover in
> > the hundreds of millions.  The various interest rates we used were
> > usually 8 or 9 decimal places long
> 
> Curious. The people who buy and sell these securities on a daily basis
> laugh hysterically when they see a yield quoted to the 4th decimal
> place.

Well, I'm glad they have the leisure to laugh about it.  We had to follow
Treasury Department rules for how we handled rounding & interest rates.
The yield and interest rates are always 8 or 9 places long, and the
mandated rounding rules apply out at 12 or 13 decimal places IIRC.


> > [...]
> 
> Is this for a pension plan or for an investment plan? Sounds like a
> variable annuity, which is such a scam anyway that they deserve to be
> beaten up by everyone, no matter whether the pennies add up or not.

This system tracked payments by electric utilities using nuclear
reactors for power generation into a common fund intended to finance
waste disposal facilities for the spent fuel.  They paid by the
kilowatt-hour generated.  The system also handled payments to help
finance destruction and disposal of decommissioned nuclear ordnance and
R&D facility waste.  The idea is to take these payments in, invest them
in T-bills, notes, and overnight floats- and pay out into the various
projects the fund is supposed to finance.


 
> > If you think the shareholders don't care, I hope you're there to answer
> > the questions when accounts go from 51000 to 50999 shares because of an
> > FP error in your code.  Bring a sleeping bag to the office, and you'll
> > need to find someplace to get your 3 hours of sleep a night while you
> > fix the system.
> 
> I'm not sure why one would use FP to count shares... In any case,
> fractional shares are only counted to 3 decimal places in the cases I've
> seen. Still, who cares about pennies when you're dealing with securities
> with volatilities over 1% per month?

If all you care about is 3 decimal places, then you <probably> won't be
running into trouble- but there is no guarantee it isn't happening right
now, disguised by the nature of the algorithms.  Sometimes financial
calculations using FP work fine until some inflection point is reached,
and then the rounding errors start creeping in.
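
To make that concrete, a toy illustration in CL (mine, not Greg's
system; the exact single-float output varies by implementation, but the
shortfall is the point):

;; Summing a penny ten thousand times in single-float drifts away from
;; the exact answer, because 0.01 has no exact binary representation
;; and each addition rounds.  The same sum over an exact rational does
;; not drift.
(loop repeat 10000 sum 0.01)   ; => close to, but not exactly, 100.0
(loop repeat 10000 sum 1/100)  ; => 100, exactly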

From an accounting standpoint your last statement is simply insane-
however much it makes sense from an investments perspective.  If you
persisted in that attitude on any of the financial systems I had
experience with, the users would quickly have you out of the project.

Gregm
From: Rahul Jain
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87psylnqxy.fsf@nyct.net>
Greg Menke <··········@toadmail.com> writes:

> Rahul Jain <·····@nyct.net> writes:
>
>> Greg Menke <··········@toadmail.com> writes:
>> 
> From an accounting standpoint your last statement is simply insane-
> however much it makes sense from an investments perspective.  If you
> persisted in that attitude on any of the financial systems I had
> experience with, the users would quickly have you out of the project.

Yes, I see what you mean, but it's rather interesting that traders don't
care about pennies while the people who write the reports they base their
trading decisions on do... until they report their results, where any
number of single-penny errors wouldn't make a difference to the final
result in most cases.

>> That's no problem. Just get a job writing software for Repo traders and
>> you'll get more than your share of that money. :)
>
> Or be a big enough customer so the bank actually cares about you.

Yes, though in that case, on the flip side, you'll also be paying the
bank for the interest earned until you deliver the securities you sold.

>> > Oh it cost vastly more to fix the pennies than their total worth,
>> > probably 4 or 5 orders of magnitude more.  The standard joke goes like
>> > "Can't I just give you $50 to cover the rounding errors we expect to
>> > have for the life of the system?"  Its a standard joke because the
>> > accountants want their pennies to add up and thats the end of the story.
>> > They're very happy to make them add up more inexpensively, but job #1 is
>> > to make them add up.  Sort of like job #1 for a doctor is to not kill
>> > his/her patients.
>> 
>> However, the value of life in pretty much all parts of the world is much
>> larger than pennies here and there.
>> 
>
> True, but we're not talking about the value of life, we're talking about
> accounting.  And from that perspective, human life hasn't much of any
> value- generally people are accounted as costs whereas capital equipment
> and facilities are resources.  This is accounting, not HR or public
> relations.

I'm talking about the job of the doctor being to save lives vs. the job
of the accountant being to count money. Not killing someone is a rather
more important goal, relative to the job, than making sure that someone
didn't leave a penny on the floor of the deli.

> There are disagreements, but there are well defined rules for how
> interest is accrued.

The great thing about standards is... :)

> I imagine the trick is making sure both parties agree on what set of
> rules.

Indeed, that would be quite a trick. They also need to agree on which
hardware and language are to be used to compute the accrued interest,
because the results from one of the industry-standard securities
computation libraries are different even on Intel x86 depending on
whether you use the Java or C/JNI versions of the library. That's
probably why they provide a JNI version of it: people started
complaining that the results given by the Java version disagreed with
the C version, so they wrapped up the C code for use by JNI and sold it
as a separate option.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Christopher C. Stacy
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <usm45a3nq.fsf@news.dtpq.com>
Cameron MacKinnon <··········@clearspot.net> writes:
> Most programmers and most programs need reliable decimal numeric
> manipulation more than they need fast approximate binary answers.

I don't believe this, either.
From: ············@reillyhayes.com
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <1107990518.845665.141500@o13g2000cwo.googlegroups.com>
Decimal arithmetic is exceptionally useful for financial applications,
which is why it has been a feature of mainframe computers for decades.
The interest you earn on an interest bearing bank account was likely
calculated using decimal arithmetic.
From: Rahul Jain
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87is50i9fr.fsf@nyct.net>
············@reillyhayes.com writes:

> Decimal arithmetic is exceptionally useful for financial applications,
> which is why it has been a feature of mainframe computers for decades.
> The interest you earn on an interest bearing bank account was likely
> calculated using decimal arithmetic.

Even these days? Not likely, when their software is probably being
translated to C# or Java as we speak. Also, the interest needs to be
rounded off to the nearest penny. That's far more error for the typical
account than the precision of a 64-bit float. Those mainframes probably
didn't support 64-bit floats, which is why decimal arithmetic would be
useful.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Duane Rettig
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <44qgks1er.fsf@franz.com>
Rahul Jain <·····@nyct.net> writes:

> ············@reillyhayes.com writes:
> 
> > Decimal arithmetic is exceptionally useful for financial applications,
> > which is why it has been a feature of mainframe computers for decades.
> > The interest you earn on an interest bearing bank account was likely
> > calculated using decimal arithmetic.
> 
> Even these days? Not likely, when their software is probably being
> translated to C# or Java as we speak. Also, the interest needs to be
> rounded off to the nearest penny. That's far more error for the typical
> account than the precision of a 64-bit float. Those mainframes probably
> didn't support 64-bit floats, which is why decimal arithmetic would be
> useful.

Careful.  Since 1969 the IBM 260 architecture had up to 16-byte hardware
floating point.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Duane Rettig
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <4zmycqjnf.fsf@franz.com>
Duane Rettig <·····@franz.com> writes:

> Rahul Jain <·····@nyct.net> writes:
> 
> > ············@reillyhayes.com writes:
> > 
> > > Decimal arithmetic is exceptionally useful for financial applications,
> > > which is why it has been a feature of mainframe computers for decades.
> > > The interest you earn on an interest bearing bank account was likely
> > > calculated using decimal arithmetic.
> > 
> > Even these days? Not likely, when their software is probably being
> > translated to C# or Java as we speak. Also, the interest needs to be
> > rounded off to the nearest penny. That's far more error for the typical
> > account than the precision of a 64-bit float. Those mainframes probably
> > didn't support 64-bit floats, which is why decimal arithmetic would be
> > useful.
> 
> Careful.  Since 1969 the IBM 260 architecture had up to 16-byte hardware
============================== 360
> floating point.

Slip of the finger :-(

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Rahul Jain
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87vf90fbsn.fsf@nyct.net>
Duane Rettig <·····@franz.com> writes:

> Careful.  Since 1969 the IBM 260 architecture had up to 16-byte hardware
> floating point.

I should have guessed. :)

Maybe the issue was one of memory usage? xxx,xxx.xx is a typical maximum
for bank accounts (people with more probably tend to spread their money
into stuff like CDs, bonds, stocks, real estate, etc). That would
require 4*8 = 32 bits to represent as BCD. A 32-bit float would be
unacceptable for such calculations, especially when dealing with bank
accounts and not investment accounts. But 64-bit floats will be plenty,
as you only need to be precise to the nearest penny, and they compute
the interest off the average balance (or at least that's what they'd
have you believe), which is probably rounded to the nearest penny, too.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Duane Rettig
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <4vf90qiew.fsf@franz.com>
Rahul Jain <·····@nyct.net> writes:

> Duane Rettig <·····@franz.com> writes:
> 
> > Careful.  Since 1969 the IBM 260 architecture had up to 16-byte hardware
> > floating point.

[I already corrected the slip of my fingers where I had intended to say 360
instead of 260.]

> I should have guessed. :)

> Maybe the issue was one of memory usage? xxx,xxx.xx is a typical maximum
> for bank accounts (people with more probably tend to spread their money
> into stuff like CDs, bonds, stocks, real estate, etc). That would
> require 4*8 = 32 bits to represent as BCD. A 32-bit float would be
> unacceptable for such calculations, especially when dealing with bank
> accounts and not investment accounts. But 64-bit floats will be plenty,
> as you only need to be precise to the nearest penny, and they compute
> the interest off the average balance (at least they pretend to want you
> to think that), which is probably rounded to the nearest penny, too.

Nah; BCD is 5/8 as efficient as binary at storing numbers, though of
varying width.

The issue was always accuracy and repeatability.  In this IEEE-754 era,
we can't imagine the horrible confusion that surrounded pre-1985 floating
point calculations, made by hardware claiming to be more accurate than
other hardware, and all the while what was really desired was repeatability.
Note that IBM floats had an interesting twist; their radix was 16, not
2, so the mantissa was grainier than a same-width mantissa for a float
representation which encoded a radix of 2.  This didn't necessarily
affect accuracy, but when I first ported Franz Lisp to the IBM 360
architecture, and then later when I joined Franz Inc and ported
Allegro CL to the same architecture, it was frustrating to see the same
calculations give different results, just because of the radix.  And in
the 2-vs-16 radix situation, one is a power of the other, so the
translation difference is purely a binary shift.  Whereas in a 10-vs-16
or a 10-vs-2 situation, the translation is due to shifts and adds.

I won't argue with anyone about whether or not losing pennies is
important.  It is a question of perspective.  And having all
kinds of customers that lean both ways, it is important to try to
accommodate both points of view.

To keep this light: wasn't it Superman 2 where the villain made his
millions by siphoning off the fractions of pennies from the round-off
errors in bank calculations? :-)

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Cameron MacKinnon
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <nsCdnX4cV5S0WJbfRVn-qQ@golden.net>
Duane Rettig wrote:
> The issue was always accuracy and repeatability.  In this IEEE-754
> era, we can't imagine the horrible confusion that surrounded pre-1985
> floating point calculations, made by hardware claiming to be more
> accurate than other hardware, and all the while what was really
> desired was repeatability.

I've read of one area where this is still a problem: Computer animation.
At the big houses which have a mix of Intel and various RISC machines,
they've discovered that they can't distribute rendering jobs to run on
multiple manufacturers' hardware because they produce slightly different
answers and then the polygons don't stitch together or there's frame to
frame jitter.
From: Rahul Jain
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87is50f7w9.fsf@nyct.net>
Duane Rettig <·····@franz.com> writes:

> The issue was always accuracy and repeatability.  In this IEEE-754 era,
> we can't imagine the horrible confusion that surrounded pre-1985 floating
> point calculations, made by hardware claiming to be more accurate than
> other hardware, and all the while what was really desired was repeatability.

Yes, that's true. Portability was probably a real concern. The main
portability difference with general use of FP these days is Intel's
80-bit "bonus" internal precision. And that hardly has an influence on
the kind of math non-numerical-analysts need to do.

Note that one of the most popular libraries for performing interest
calculations (TIPS's implementation of SSCSL) uses floating point. In
fact, it gets different results in pure Java vs. JNI because of (I
assume) the 80-bit float issue.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Alan Crowe
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <86hdkj6o98.fsf@cawtech.freeserve.co.uk>
Duane Rettig <·····@franz.com> writes:
> Note that IBM floats had an interesting twist; their radix
> was 16, not 2, so the mantissa was grainier than a
> same-width mantissa for a float representation which
> encoded a radix of 2.

Twenty-something years ago I wrote some 64-bit software
floating point routines for an 8-bit microprocessor, a 6809.
I suspected that performance was going to be barely adequate
while accuracy would be ample.  To shift 4 bits for
renormalisation would require going through the mantissa
doing a single-bit shift four times, taking four times the
time of the basic add with carry.  I decided to use 256 as
my floating point radix.  That way I never had to bother with
shifting the bits within the bytes.  Every so often I would
have to shift by 8 bits, but since it was an 8-bit
microprocessor, that was just moving the bytes around.

I guess it would have caused quite a shock if folk had seen

(float-radix 1.0d0) => 256

Alan Crowe
Edinburgh
Scotland
From: Greg Menke
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <m3y8dwy5n9.fsf@athena.pienet>
······@news.dtpq.com (Christopher C. Stacy) writes:

> Cameron MacKinnon <··········@clearspot.net> writes:
> > Most programmers and most programs need reliable decimal numeric
> > manipulation more than they need fast approximate binary answers.
> 
> I don't believe this, either.

Don't underestimate the improvement in throughput you gain by making the
math finally work with a 10-fold increase in overhead.  Speeding it up
is lots easier than making it right.

Gregm
From: Christophe Rhodes
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <sqis51ij09.fsf@cam.ac.uk>
Cameron MacKinnon <··········@clearspot.net> writes:

> I submit that any number can be represented in a computer, in either
> n bits per digit or a small amount of code.

I'm no expert, but I don't believe this is actually true.  Isn't this
essentially the same issue as the Halting problem?

Christophe
From: Pascal Bourguignon
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <87ekfpa42t.fsf@thalassa.informatimago.com>
Cameron MacKinnon <··········@clearspot.net> writes:
> I submit that any number can be represented in a computer, in either n
> bits per digit or a small amount of code. 

I submit that Gödel's Theorem proves you're wrong.


Besides, _any number_ covers a very large set, with things you can't
even imagine.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
Kitty like plastic.
Confuses for litter box.
Don't leave tarp around.
From: Coby Beck
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <IFvOd.31147$tU6.4096@edtnps91>
"Cameron MacKinnon" <··········@clearspot.net> wrote in message 
···························@golden.net...
> Most programmers and most programs need reliable decimal numeric
> manipulation more than they need fast approximate binary answers.

Can you please provide some kind of substantiation for this statement?

Simply saying it, no matter how often or passionately, does not make it 
true.


-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: Thomas A. Russ
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <ymizmyg1b6y.fsf@sevak.isi.edu>
"Christophe Turle" <······@nospam.com> writes:

> 'human context' means humans in general. How many % of documents written by 
> humans which include 0.1111111 to be interpreted as a float ?

How many % of documents written by humans are compilable by any
compiler?

How many % of programs written by humans interpret 0.1111111 as a float?

As Kent so eloquently pointed out, you should not be using the Lisp
reader to read non-Lisp input.  As much as it pains me to admit this,
I don't think you would have run into the same problem doing this in C.
That is because in C you would have to write your own parser from
scratch and could thus have placed any interpretation you wanted on
the input string.  The fact that Lisp makes reading and parsing input
easier just makes it easier to misapply its tools.

In the time it takes to respond to all these messages, I bet you could
even write (or adapt from other sources) the reader macros you would
need to have the effects you desire.

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Kent M Pitman
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <ufz06csou.fsf@nhplace.com>
···@sevak.isi.edu (Thomas A. Russ) writes:

> [...] As much as this pains me to admit this,
> I don't think you would have run into the same problem doing this in C
> instead.  That is because in C you would have to write your own parser
> from scratch and could thus have placed any interpretation you wanted on
> the input string.  Just because Lisp makes reading input and parsing it
> easier just makes it easier to misapply its tools.

True for C, but not C++, which follows Lisp's lead in providing a
bunch of fun parser tools to misapply.  In fact, it goes one step
better, since the tools are useless for doing program manipulation and
only useful for data exportation/importation.  So whereas Lisp has
tools that have a specific semantics for a specific purpose (the
language itself) that can be abused, C++ has tools that have a
specific semantics for no specific purpose (just in case you need that
particular semantics, I guess) but that can be similarly abused.

Ah, progress ... by any other name, would it smell as sweet?
From: Barry Wilkes
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <y8dyp601.fsf@acm.org>
Kent M Pitman <······@nhplace.com> writes:
>
> True for C, but not C++, which follows Lisp's lead in providing a
> bunch of fun parser tools to misapply.  In fact, it goes one step
> better, since the tools are useless for doing program manipulation and
> only useful for data exportation/importation.  So whereas Lisp has
> tools that have a specific semantics for a specific purpose (the
> language itself) that can be abused, C++ has tools that have a
> specific semantics for no specific purpose (just in case you need that
> particular semantics, I guess) but that can be similarly abused.
>
> Ah, progress ... by any other name, would it smell as sweet?

Kent,

Could you be a bit more specific about which C++ parsing tools you are
referring to here?  I'm not sure I understand the point you are making.

Barry.
From: Kent M Pitman
Subject: Re: demonic problem descriptions
Date: 
Message-ID: <u4qglbujv.fsf@nhplace.com>
Barry Wilkes <·······@acm.org> writes:

> Kent M Pitman <······@nhplace.com> writes:
> >
> > True for C, but not C++, which follows Lisp's lead in providing a
> > bunch of fun parser tools to misapply.  In fact, it goes one step
> > better, since the tools are useless for doing program manipulation and
> > only useful for data exportation/importation.  So whereas Lisp has
> > tools that have a specific semantics for a specific purpose (the
> > language itself) that can be abused, C++ has tools that have a
> > specific semantics for no specific purpose (just in case you need that
> > particular semantics, I guess) but that can be similarly abused.
> >
> > Ah, progress ... by any other name, would it smell as sweet?
> 
> Could you be a bit more specific about which C++ parsing tools you are
> referring to here?  I'm not sure I understand the point you are making.

Oh, I was just referring to things like the istream/ostream stuff.

You can make the case that they're useful for reading/writing data, 
certainly, but my point is that they could choose ANY notation for
so doing as long as it was invertible. They are logically dissociated
from the syntax of the language, and only accidentally in alignment
with it.  Yet they have a specific syntax built into them, and if you
deviate from it, you'll get yourself into trouble.

Does that help?
From: Thomas A. Russ
Subject: Re: demonic problem descriptions [was Re: demonic numbers !]
Date: 
Message-ID: <ymi1xbs2q2a.fsf@sevak.isi.edu>
"Christophe Turle" <······@nospam.com> writes:

> I agree. And the context for Lisp should be compatible with human context. 
> default float interpretation is not compatible with human reading. That's my 
> point.

I disagree.  The context for Lisp should be compatible with the
programming language context.  It is, after all, a programming language
and it is more useful for it to conform to that environment.

> But as i said before, 0.1111 shouldn't be read as a 
> float in Lisp. It is bad design choice.

Opinions can reasonably differ on this.  In any case, the same design
choice has been made in just about every other programming language.  At
least if it is "bad", it has lots of company.  But that just reinforces
my argument above, that the behavior is what most people would expect
when approaching Lisp as a programming language.

But, this is likely to degenerate into a religious argument, so I
suppose we will just have to disagree on this evaluation.

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Thomas A. Russ
Subject: Re: demonic problem descriptions [was Re: demonic numbers !]
Date: 
Message-ID: <ymi3bw82qbe.fsf@sevak.isi.edu>
Kent M Pitman <······@nhplace.com> writes:

> 
> "Christophe Turle" <······@nospam.com> writes:
> 
> > I can't change inputs. I have the string "(x 123.2569 y 1256.3588)" input 
> > for example. So i can't introduce reader macros :(
> 
> No, that's not so.  That is, the assertion in your first sentence is not,
> as you seem to suggest, the result of some logical implication from the
> premise in the second sentence to an unavoidable conclusion in the third
> sentence.
> 
> For example, you could make #\0, #\1, #\2, etc. be readmacros that
> could then read the rest of the number in whatever format they liked.

Actually, if one wants an example of something like this, the MEASURES
package from Roman Cunis does exactly this sort of thing in order to be
able to read items like 15m/s as a dimensioned number.

Here is a link to the CMU repository with that code:

   http://tinyurl.com/5lppy

It would give you a start on writing a reader macro such as Kent suggests.
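
For a feel for the technique, here is a minimal sketch (mine, not the
MEASURES code) that makes the digit characters macro characters in a
copy of the standard readtable, so that unsigned decimal literals read
as exact rationals.  It deliberately ignores signs, exponents, and
symbols that begin with a digit (e.g. 1+ would no longer read as one
token):

(defun read-decimal-as-rational (stream char)
  ;; CHAR is the digit that fired the macro.  Accumulate digits,
  ;; remembering how many of them follow the decimal point.
  (let ((digits (list (digit-char-p char)))
        (scale nil))
    (loop for c = (peek-char nil stream nil nil t)
          while (and c (or (digit-char-p c)
                           (and (char= c #\.) (null scale))))
          do (read-char stream t nil t)
             (if (char= c #\.)
                 (setf scale 0)
                 (progn (push (digit-char-p c) digits)
                        (when scale (incf scale)))))
    (/ (reduce (lambda (acc d) (+ (* 10 acc) d))
               (nreverse digits) :initial-value 0)
       (expt 10 (or scale 0)))))

(defparameter *rational-readtable* (copy-readtable nil))

(dotimes (i 10)
  (set-macro-character (digit-char i) #'read-decimal-as-rational
                       t *rational-readtable*))

;; (let ((*readtable* *rational-readtable*))
;;   (read-from-string "102.0051"))
;; => 1020051/10000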


-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Barry Margolin
Subject: Re: demonic numbers !
Date: 
Message-ID: <barmar-4D7E8B.01084304022005@comcast.dca.giganews.com>
In article <·······················@news.free.fr>,
 "Christophe Turle" <······@nospam.com> wrote:

> How to make CL (Clisp) read decimal numbers as rational ?
> 
> For example, i have to read one entry : 102.0051 but this is interpreted as 
> a float ! And i don't know how to write the function F such that :
> 
> (F entry) => "102005100" which in theory is just 'multiply by 1000000'.
> 
> If the entry was a double-float it works with rationalize, but i can't 
> change the inputs (and i don't want too, i don't want to know what the 
> underlying number representation is !)

RATIONALIZE should work with any type of float, not just double-float.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Pascal Bourguignon
Subject: Re: demonic numbers !
Date: 
Message-ID: <87k6pod3b5.fsf@thalassa.informatimago.com>
Barry Margolin <······@alum.mit.edu> writes:

> In article <·······················@news.free.fr>,
>  "Christophe Turle" <······@nospam.com> wrote:
> 
> > How to make CL (Clisp) read decimal numbers as rational ?
> > 
> > For example, i have to read one entry : 102.0051 but this is interpreted as 
> > a float ! And i don't know how to write the function F such that :
> > 
> > (F entry) => "102005100" which in theory is just 'multiply by 1000000'.
> > 
> > If the entry was a double-float it works with rationalize, but i can't 
> > change the inputs (and i don't want too, i don't want to know what the 
> > underlying number representation is !)
> 
> RATIONALIZE should work with any type of float, not just double-float.

That's not exactly the behavior we want here.


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GCS d? s++:++ a+ C+++ UL++++ P--- L+++ E+++ W++ N+++ o-- K- w--- 
O- M++ V PS PE++ Y++ PGP t+ 5+ X++ R !tv b+++ DI++++ D++ 
G e+++ h+ r-- z? 
------END GEEK CODE BLOCK------
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <42051224$0$14161$626a14ce@news.free.fr>
"Barry Margolin" <······@alum.mit.edu> a �crit dans le message de news: 
····························@comcast.dca.giganews.com...
> In article <·······················@news.free.fr>,
> "Christophe Turle" <······@nospam.com> wrote:
>
>> How to make CL (Clisp) read decimal numbers as rational ?
>>
>> For example, i have to read one entry : 102.0051 but this is interpreted 
>> as
>> a float ! And i don't know how to write the function F such that :
>>
>> (F entry) => "102005100" which in theory is just 'multiply by 1000000'.
>>
>> If the entry was a double-float it works with rationalize, but i can't
>> change the inputs (and i don't want too, i don't want to know what the
>> underlying number representation is !)
>
> RATIONALIZE should work with any type of float, not just double-float.


I'm afraid not (CLISP 2.33)

CL-USER> (rationalize 102.0051)
60081/589

CL-USER> (rationalize 102.0051d0)
1020051/10000
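
The difference follows from RATIONALIZE's contract: it returns a
rational close enough that (= x (float (rationalize x) x)) holds, and
in practice the simplest such rational.  A quick check on the values
above (output as that contract dictates):

(float 60081/589 1.0)        ; => 102.0051, the very same single-float
(float 1020051/10000 1.0d0)  ; => 102.0051d0, the very same double-float

At single-float precision the simpler fraction 60081/589 already
round-trips to the original float; with a double-float's extra mantissa
bits it no longer does, which is why the double case gives
1020051/10000.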


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87is566um1.fsf@nyct.net>
"Christophe Turle" <······@nospam.com> writes:

> "Barry Margolin" <······@alum.mit.edu> a �crit dans le message de news: 
> ····························@comcast.dca.giganews.com...
>> RATIONALIZE should work with any type of float, not just double-float.
>
>
> i'm afraid not (CLisp 2.33)
>
> CL-USER> (rationalize 102.0051)
> 60081/589
>
> CL-USER> (rationalize 102.0051d0)
> 1020051/10000

I'm afraid so... Please explain how the above is incorrect.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <420601e8$0$1011$626a14ce@news.free.fr>
"Rahul Jain" <·····@nyct.net> a �crit dans le message de news: 
··············@nyct.net...
> "Christophe Turle" <······@nospam.com> writes:
>
>> "Barry Margolin" <······@alum.mit.edu> a �crit dans le message de news:
>> ····························@comcast.dca.giganews.com...
>>> RATIONALIZE should work with any type of float, not just double-float.
>>
>>
>> i'm afraid not (CLisp 2.33)
>>
>> CL-USER> (rationalize 102.0051)
>> 60081/589
>>
>> CL-USER> (rationalize 102.0051d0)
>> 1020051/10000
>
> I'm afraid so... Please explain how the above is incorrect.


I'm not sure I understand your reply there.

I didn't mean to say that 'rationalize' behaves incorrectly. I just showed 
that the float type of the input matters.

-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <871xbt7uqz.fsf@nyct.net>
"Christophe Turle" <······@nospam.com> writes:

> I didn't want to say that 'rationalize' behave incorrectly. I just showed 
> that the input type of float matters.

Of course it does. That much is clear from the description of what it
should do. The issue is that what _you_ want to do is not what it does. 
It was suggested as a _possible_ way to solve whatever problem you were
having. The solution you seem to want is to be able to change the lisp
reader to not read lisp any more. Strange goal, if you ask me.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <42062efb$0$1017$626a14ce@news.free.fr>
"Rahul Jain" <·····@nyct.net> a �crit dans le message de news: 
··············@nyct.net...
> The solution you seem to want is to be able to change the lisp
> reader to not read lisp any more.
> Strange goal, if you ask me.

Lisp lives.

If a Lisp feature is missing or badly designed, I don't find it strange 
to change Lisp. It will just read NEW Lisp.

Of course, you may not agree with the new feature :

 0.11111  => read as a rational
 0.11111f => read as a float

But that's another issue ;)

-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <871xbol8bi.fsf@nyct.net>
"Christophe Turle" <······@nospam.com> writes:

> Of course, you may not agree with the new feature :
>
>  0.11111  => read as a rational
>  0.11111f => read as a float

No, I agree with it. The problem is that many people _can't_ _afford_ to
agree with it, and I don't want to lose the ability to share code and
implementation effort with them.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: ···············@yahoo.com
Subject: Re: demonic numbers !
Date: 
Message-ID: <1107536484.790098.191080@g14g2000cwa.googlegroups.com>
In your application, do you know for sure that the decimals are exact?
That is, if your file has 1.3333, are you sure it's 13333/10000 and not
4/3 ?

If so, follow Pascal's suggestion, or use something like
(defun meg (x)
  (round (* 1000000 x)))
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <42051380$0$14101$626a14ce@news.free.fr>
<···············@yahoo.com> wrote in message news: 
························@g14g2000cwa.googlegroups.com...
> In your application, do you know for sure that the decimals are exact?
> That is, if your file has 1.3333, are you sure it's 13333/10000 and not
> 4/3 ?

Yes, I am.


> If so, follow Pascal's suggestion, or use something like
> (defun meg (x)
>  (round (* 1000000 x)))


CL-USER> (defun meg (x)
    (round (* 1000000 x)))

MEG

CL-USER> (meg 102.0051)
102005096
0.0

CL-USER> (meg 102.0051d0)
102005100
0.0d0


No, what I really want is a ->rational at read time, without reader 
macros, since I don't control the inputs :(


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87ekfu6uhl.fsf@nyct.net>
"Christophe Turle" <······@nospam.com> writes:

> No, what i really want is a ->rational at the read time and without reader 
> macros since i don't control inputs :(

You _must_ _not_ _use_ _the_ _lisp_ _reader_, _then_, _because_ _you_
_are_ _not_ _reading_ _lisp_.

To make it clear: The lisp reader reads lisp code. What you want to read
is not lisp code. Therefore, you do not want to use the lisp reader.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <420600fd$0$956$626a14ce@news.free.fr>
"Rahul Jain" <·····@nyct.net> a �crit dans le message de news: 
··············@nyct.net...
> "Christophe Turle" <······@nospam.com> writes:
>
>> No, what i really want is a ->rational at the read time and without 
>> reader
>> macros since i don't control inputs :(
>
> You _must_ _not_ _use_ _the_ _lisp_ _reader_, _then_, _because_ _you_
> _are_ _not_ _reading_ _lisp_.
>
> To make it clear: The lisp reader reads lisp code. What you want to read
> is not lisp code. Therefore, you do not want to use the lisp reader.


Yes, I know it's not Lisp code. And this is in fact my problem: why, IN 
Lisp, is .1111 read as a float ? Yes, the spec is the spec. But on this 
point, imho the spec is not good.

I can't satisfy myself thinking that in Lisp .1111 does not mean .1111 
(from a human reader's point of view).

It's the same as telling you that in the language Horrible1, HGF.FDS means 
.1111 because internally the language uses compression algorithms so it is 
more efficient. Where is the abstraction ?


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87wttl6fr0.fsf@nyct.net>
"Christophe Turle" <······@nospam.com> writes:

> Yes, i know it's not lisp code. And this is in fact my problem, why IN lisp, 
> .1111 is read as a float ? yes, the spec is the spec. But for this point, 
> imho the spec is not good.

As you've been told, this point was debated when the spec was being
drawn up, but making .1111 read as a rational would break too much
existing code.

> I can't satisfy myself thinking that in Lisp .1111 does not mean .1111 (from 
> human readers point of view)

Humans don't read code. Computers read code. :)

> It's the same as telling you that in language Horrible1 HGF.FDS means .1111 
> because internally the language uses compression algorithms so it is more 
> efficient. Where is abstraction ?

No, it's not. What do you mean by abstraction? The details of the FP
implementation are abstracted. The implementation is allowed to use any
base, any number of exponent and mantissa bits, a negative zero, NaN(s),
infinities, gradual under/overflow, etc. The last few can't be
determined directly via standard operators, but the others allow a
decent degree of adaptability of code to different FP implementations.
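
Several of those parameters are in fact queryable through standard
operators; a quick sketch (the values in the comments are what a
typical IEEE-754 double-float implementation reports, not something
the standard promises):

(float-radix 1.0d0)      ; => 2 on IEEE hardware (16 on hex-float IBM 360s)
(float-digits 1.0d0)     ; => 53, mantissa digits counted in that radix
(float-precision 1.0d0)  ; => 53, significant digits of this normalized value
least-positive-double-float  ; a denormal when gradual underflow is supported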

In any case, FP numbers are an important part of computing. More
important than rational numbers, and that's not likely to change in the
future. Too much of the computation done needs to be fast and is based
on approximate, heuristic algorithms or inputs.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <42063ac6$0$936$626a14ce@news.free.fr>
"Rahul Jain" <·····@nyct.net> a �crit dans le message de news: 
··············@nyct.net...
> "Christophe Turle" <······@nospam.com> writes:
>
>> Yes, i know it's not lisp code. And this is in fact my problem, why IN 
>> lisp,
>> .1111 is read as a float ? yes, the spec is the spec. But for this point,
>> imho the spec is not good.
>
> As you've been told, this point was debated when the spec was being
> drawn up, but making .1111 read as a rational would break too much
> existing code.

Once again this good old reason (like with all in upper case) ;)

(let ((*read-rational-as* :float))
 (load "your-library") )


>> I can't satisfy myself thinking that in Lisp .1111 does not mean .1111 
>> (from
>> human readers point of view)
>
> Humans don't read code. Computers read code. :)

Sorry, but humans read and write code a lot ;)

>> It's the same as telling you that in language Horrible1 HGF.FDS means 
>> .1111
>> because internally the language uses compression algorithms so it is more
>> efficient. Where is abstraction ?
>
> No it's not. What do you mean by abstraction? The details of the FP
> implementation are abstracted.

But the number representation is not. In this case, you are saying that 
such numbers must be represented as floats.

> Too much of the computation done needs to be fast and is based
> on approximate, heuristic algorithms or inputs.

You can (via the compiler or macros) go from rational to float, but you 
can't go back the other way ! That's why it is better to read as rational 
(by default) and optimize your code later.
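
A quick illustration of the one-way street, reusing the CLISP 2.33
results quoted earlier in the thread:

CL-USER> (float 1020051/10000 1.0)  ; rational -> float : easy, but lossy
102.0051
CL-USER> (rationalize *)            ; and back : the exact value is gone
60081/589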


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: jayessay
Subject: Re: demonic numbers !
Date: 
Message-ID: <m3pszcj80d.fsf@rigel.goldenthreadtech.com>
"Christophe Turle" <······@nospam.com> writes:

> "Rahul Jain" <·····@nyct.net> a �crit dans le message de news: 
> ··············@nyct.net...
> > "Christophe Turle" <······@nospam.com> writes:
> >
> >> Yes, i know it's not lisp code. And this is in fact my problem, why IN 
> >> lisp,
> >> .1111 is read as a float ? yes, the spec is the spec. But for this point,
> >> imho the spec is not good.
> >
> > As you've been told, this point was debated when the spec was being
> > drawn up, but making .1111 read as a rational would break too much
> > existing code.
> 
> Once again this old good reason (like with all in upper case) ;)

So, if things were bent to your own idiosyncratic will, all
programming languages would interpret something like .1111 as
1111/10000, i.e., it should be ensured that they all have such
semantics with this sort of number representation.  Since none of them
currently do this, you believe that they should be changed and further
imply that citing "would break too much code" is insufficient reason
to not make the change.  Even if your idea made some sense (or had any
CogSci validity, which frankly it doesn't) you seem to believe that
making this change is more important than not breaking all existing
code.  It is no wonder if you are beginning to be viewed as a raving
lunatic.

> >> I can't satisfy myself thinking that in Lisp .1111 does not mean .1111 
> >> (from
> >> human readers point of view)
> >
> > Humans don't read code. Computers read code. :)
> 
> sorry, but humans read and write code a lot ;)

Only that subset known as "programmers".  Which is to say that the
only "human readers" involved here are programmers.  Now, other than
yourself, how many _programmers_ do you think would read .1111 as you
are suggesting they will?  In my experience the number is exactly 0.


/Jon

-- 
'j' - a n t h o n y at romeo/charley/november com
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <4207ef51$0$495$626a14ce@news.free.fr>
"jayessay" <······@foo.com> a �crit dans le message de news: 
··············@rigel.goldenthreadtech.com...
> "Christophe Turle" <······@nospam.com> writes:
>
>> "Rahul Jain" <·····@nyct.net> a �crit dans le message de news:
>> ··············@nyct.net...
>> > "Christophe Turle" <······@nospam.com> writes:
>> >
>> >> Yes, i know it's not lisp code. And this is in fact my problem, why IN
>> >> lisp,
>> >> .1111 is read as a float ? yes, the spec is the spec. But for this 
>> >> point,
>> >> imho the spec is not good.
>> >
>> > As you've been told, this point was debated when the spec was being
>> > drawn up, but making .1111 read as a rational would break too much
>> > existing code.
>>
>> Once again this old good reason (like with all in upper case) ;)
>
> So, if things were bent to your own idiosyncratic will, all
> programming languages would interpret something like .1111 as
> 1111/10000, i.e., it should be ensured that they all have such
> semantics with this sort of number representation.  Since none of them
> currently do this, you believe that they should be changed

No. I'm not interested in changing low-level languages.

> and further
> imply that citing "would break too much code" is insufficient reason
> to not make the change.  Even if your idea made some sense (or had any
> CogSci validity, which frankly it doesn't) you seem to believe that
> making this change is more important than not breaking all existing
> code.  It is no wonder if you are beginning to be viewed as a raving
> lunatic.

First, to you and the others saying "he doesn't care about compatibility": 
please read the posts of this thread. It is at least the third time that I 
have said that all I want* is a special variable like *decimal-notation-as* 
defaulting to :rational :

So, to be compatible, we can use :

(let ((*decimal-notation-as* :float))
  (read ...) )

So please, no more backward-compatibility or language-interface reasons.

>> >> I can't satisfy myself thinking that in Lisp .1111 does not mean .1111
>> >> (from
>> >> human readers point of view)
>> >
>> > Humans don't read code. Computers read code. :)
>>
>> sorry, but humans read and write code a lot ;)
>
> Only that subset known as "programmers".  Which is to say that the
> only "human readers" involved here are programmers.  Now, other than
> yourself, how many _programmers_ do you think would read .1111 as you
> are suggesting they will?  In my experience the number is exactly 0.

Our experiences are not the same...


* : what I think would be a cool addition to Lisp implementations. But 
with William Bland's trick, and that's what is wonderful about Lisp, I 
don't have to convince all the others, nor wait for them, to make it work 
for me. Only the future will tell who was right.

I think we have closed the circle for now, so thanks to all who 
participated in this thread.

-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: jayessay
Subject: Re: demonic numbers !
Date: 
Message-ID: <m3hdknj8me.fsf@rigel.goldenthreadtech.com>
"Christophe Turle" <······@nospam.com> writes:

> "jayessay" <······@foo.com> a �crit dans le message de news: 
> > Only that subset known as "programmers".  Which is to say that the
> > only "human readers" involved here are programmers.  Now, other than
> > yourself, how many _programmers_ do you think would read .1111 as you
> > are suggesting they will?  In my experience the number is exactly 0.
> 
> Our experience are not the same...

Which only shows that your extrapolations are either simply flawed
or outright megalomaniacal.


/Jon

-- 
'j' - a n t h o n y at romeo/charley/november com
From: Cesar Rabak
Subject: Re: demonic numbers !
Date: 
Message-ID: <420C1D5B.1030907@acm.org>
Christophe Turle escreveu:
> "jayessay" <······@foo.com> a �crit dans le message de news: 
> ··············@rigel.goldenthreadtech.com...
> 
>>"Christophe Turle" <······@nospam.com> writes:
[snipped]

> 
> First, you and others telling "He don't care about compatibility" please 
> read the posts of this thread. It is at least the third time that i say that 
> all i want* is a special variable like *decimal-notation-as* defaulting to 
> :rational :

And you still don't explain _why_. If your problem is to have data in 
MHz from some PDF file treated by your software just for printing, why 
not treat the string as a string, and leave out all the contortion?

Just my .019999...

--
Cesar Rabak
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <420d47f7$0$531$636a15ce@news.free.fr>
"Cesar Rabak" <······@acm.org> a �crit dans le message de news: 
················@acm.org...
> Christophe Turle escreveu:
>> "jayessay" <······@foo.com> a �crit dans le message de news: 
>> ··············@rigel.goldenthreadtech.com...
>>
>>>"Christophe Turle" <······@nospam.com> writes:
> [snipped]
>
>>
>> First, you and others telling "He don't care about compatibility" please 
>> read the posts of this thread. It is at least the third time that i say 
>> that all i want* is a special variable like *decimal-notation-as* 
>> defaulting to :rational :
>
> And you still don't explain _why_. If your problem is to have data in MHz 
> from some PDF file treated by your software just for printing, why not 
> treat the string as a string, and leave out all the contorsion?

Because they are numbers. And I may have to do some comparisons, to order 
them for example. I don't see why my program doesn't have the right to know 
what it is manipulating. Surely this indirectly comes from these damned 
limitation theorems.

If I want to tell my program that the numbers it will read with a decimal 
point are exact decimal numbers, I want to be able to express that simply*

If I think it should be the default behavior, I want to be able to specify it 
simply*

* simply: without having to recode the reader.

Some here suggest writing a specific parser, since this is 
application-specific data. For me, specific parsers should be built on top of 
the CL reader (except when strange syntax is encountered).

("errrr" 1125.3653254654654 "ed" 1145 254.25) should be parsed easily by cl 
reader. The fact that 1125.3653254654654 will be read as float by default 
causes trouble since i can't get back (with simple-float) the number i 
wanted to read.
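
To see the trouble concretely, a quick check (a sketch; the exact printed 
float varies by implementation, but the collapse does not):

;; Two visibly different decimals read as the very same single-float:
CL-USER> (= (read-from-string "1125.3653254654654")
            (read-from-string "1125.3653200000000"))
T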

With the suggestions given here, it seems there is a standard, easy way to 
add the rational parsing. It is just surprising that no CL implementation 
offers this facility.

So, from a practical point of view this is solved (thanks to William Bland). 
Time will (or will not) demonstrate the usefulness.


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Cesar Rabak
Subject: Re: demonic numbers !
Date: 
Message-ID: <420EB30E.40905@acm.org>
Christophe Turle escreveu:
> "Cesar Rabak" <······@acm.org> a �crit dans le message de news: 
> ················@acm.org...
> 
>>Christophe Turle escreveu:
>>
>>>"jayessay" <······@foo.com> a �crit dans le message de news: 
>>>··············@rigel.goldenthreadtech.com...
>>>
>>>
>>>>"Christophe Turle" <······@nospam.com> writes:
>>>
>>[snipped]
>>
>>
>>>First, you and others telling "He don't care about compatibility" please 
>>>read the posts of this thread. It is at least the third time that i say 
>>>that all i want* is a special variable like *decimal-notation-as* 
>>>defaulting to :rational :
>>
>>And you still don't explain _why_. If your problem is to have data in MHz 
>>from some PDF file treated by your software just for printing, why not 
>>treat the string as a string, and leave out all the contorsion?
> 
> 
> Because they are numbers. 

Well. . . they are frequencies printed in some PDF file, aren't they?

> And i may have to do some comparisons to order 
> them for example. I don't see why my program doesn't have the rights to know 
> what it is manipulating. Surely this indirectly comes from these damned 
> limitation theorems.

OK. As strings you can do a lot of manipulation, especially sorting and 
comparing as well.

> 
> If i want to tell my program that numbers it will read with a decimal point 
> are exact decimal numbers, i want to be able to express it simply*
> 
> If i think it is a default behavior, i want to be able to specify it simply*
> 
> * simply : don't have to recode the reader.

As has been discussed in this thread, computer representations of decimal 
numbers are not 'exact' unless you resort to specific schemes or, as I 
suggest, keep them represented as strings.

> 
> Some here suggest to do a specific parser since this is specific application 
> data. For me, specific parsers have to be built in top of cl reader (except 
> when strange syntax is encountered)
> 
> ("errrr" 1125.3653254654654 "ed" 1145 254.25) should be parsed easily by cl 
> reader. The fact that 1125.3653254654654 will be read as float by default 
> causes trouble since i can't get back (with simple-float) the number i 
> wanted to read.
> 
> With suggestions given here it seems there's a standard easy way to had the 
> rational parsing. It is just surprising that no cl implementation offers 
> this facilty.
> 
> so, from a practical point of view this is solved (thx to william bland). 
> The usefulness will be (or not) demonstrated by time.
> 
This is the important part!

Regards,

--
Cesar Rabak
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87r7jabn9e.fsf@nyct.net>
"Christophe Turle" <······@nospam.com> writes:

> For me, specific parsers have to be built in top of cl reader (except 
> when strange syntax is encountered)

Your manager is a lunatic.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Barry Margolin
Subject: Re: demonic numbers !
Date: 
Message-ID: <barmar-AB359E.08341806022005@comcast.dca.giganews.com>
In article <·······················@news.free.fr>,
 "Christophe Turle" <······@nospam.com> wrote:

> Yes, i know it's not lisp code. And this is in fact my problem, why IN lisp, 
> .1111 is read as a float ? yes, the spec is the spec. But for this point, 
> imho the spec is not good.

Why shouldn't it read it as a float?  That's the way many dialects of 
Lisp have been reading and writing floats for decades, and Common Lisp 
was designed to be compatible with preceding dialects like Maclisp where 
feasible.

It's also consistent with most other programming languages -- numbers 
with embedded decimal points are floating point.

Sure, we could have made .1111 read as a rational, and require you to 
write something like .1111e0 to get a float, but why should we have?  We 
provided another syntax for rationals that didn't conflict with existing 
practice -- if you want a rational, type 1111/10000.
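
For the record, that syntax stays exact through arithmetic:

CL-USER> (* 9 1111/10000)
9999/10000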

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <42064250$0$961$626a14ce@news.free.fr>
"Barry Margolin" <······@alum.mit.edu> a �crit dans le message de news: 
····························@comcast.dca.giganews.com...
> In article <·······················@news.free.fr>,
> "Christophe Turle" <······@nospam.com> wrote:
>
>> Yes, i know it's not lisp code. And this is in fact my problem, why IN 
>> lisp,
>> .1111 is read as a float ? yes, the spec is the spec. But for this point,
>> imho the spec is not good.
>
> Why shouldn't it read it as a float?  That's the way many dialects of
> Lisp have been reading and writing floats for decades, and Common Lisp
> was designed to be compatible with preceding dialects like Maclisp where
> feasible.

Just toggle the parsing behavior to keep these damned compatibilities.

> It's also consistent with most other programming languages -- numbers
> with embedded decimal points are floating point.

Just toggle the parsing behavior to read all these low-level languages' inputs.

> Sure, we could have made .1111 read as a rational, and require you to
> write something like .1111e0 to get a float, but why should we have?  We
> provided another syntax for rationals that didn't conflict with existing
> practice -- if you want a rational, type 1111/10000.

Because these practices are bad. And I don't want to write 1111/10000 
instead of .1111; it is not the human convention. It is just an 
implementation detail.

Of course we can write rationals this way. We can also program all 
applications in assembler. After all, that was THE practice some years ago.


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr)
From: Christophe Rhodes
Subject: Re: demonic numbers !
Date: 
Message-ID: <sqlla1irxf.fsf@cam.ac.uk>
"Christophe Turle" <······@nospam.com> writes:

> Because these practices are bad. And i don't want to write 1111/10000 
> instead of .1111, it is not human conventional. It is just implementation 
> details.

This is just silly, now.  In my daily work, if I write 0.1111, I imply
something different from if I write 1111/10000, and the community with
which I communicate understands this distinction.  (It isn't quite the
same distinction as the distinction that the default Lisp reader
draws, but it is pretty close.)  You seem to be implying that your
personal vocabulary is somehow the unique "human convention" -- and,
I'm sorry to have to tell you, it isn't.

Christophe
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <42064cff$0$951$626a14ce@news.free.fr>
"Christophe Rhodes" <·····@cam.ac.uk> a �crit dans le message de news: 
··············@cam.ac.uk...
> "Christophe Turle" <······@nospam.com> writes:
>
>> Because these practices are bad. And i don't want to write 1111/10000
>> instead of .1111, it is not human conventional. It is just implementation
>> details.
>
> This is just silly, now.  In my daily work, if I write 0.1111, I imply
> something different from if I write 1111/10000, and the community with
> which I communicate understands this distinction.  (It isn't quite the
> same distinction as the distinction that the default Lisp reader
> draws, but it is prety close.)  You seem to be implying that your
> personal vocabulary is somehow the unique "human convention" -- and,
> I'm sorry to have to tell you, it isn't.

It is.

Your community is not the human community; it is a subset of it.

Want a proof?

> "if I write 0.1111, I imply something different from if I write 
> 1111/10000"

I bet that the vast majority of humans don't agree with your sentence and 
will tell you the contrary. When they write 0.1111 they mean 1111/10000.


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: David Steuber
Subject: Re: demonic numbers !
Date: 
Message-ID: <87hdkp5m8u.fsf@david-steuber.com>
"Christophe Turle" <······@nospam.com> writes:

> "Christophe Rhodes" <·····@cam.ac.uk> a écrit dans le message de news: 
> ··············@cam.ac.uk...
> 
> > "if I write 0.1111, I imply something different from if I write 
> > 1111/10000"
> 
> I bet that the vast majority of humans don't agree with your sentence and 
> will tell you the contrary. When they write 0.1111 they mean 1111/10000.

I'll provide a data point.  I agree with Christophe.  The following
REPL interaction is what I would expect from a machine:

CL-USER> (* 1.0d0 1/9)
0.1111111111111111D0
CL-USER> (* 1.0 1/9)
0.11111111
CL-USER> 0.1111
0.1111
CL-USER> (* 9 0.1111)
0.99990004
CL-USER> (* 9 1/9)
1
CL-USER> (* 9.0d0 1/9)
1.0D0
CL-USER> (* 9.0 1/9)
1.0
CL-USER> (* 9.0d0 0.1111d0)
0.9999D0

OpenMCL 0.14.2-p1 on Darwin.

-- 
An ideal world is left as an exercise to the reader.
   --- Paul Graham, On Lisp 8.1
From: David Steuber
Subject: Re: demonic numbers !
Date: 
Message-ID: <1107748177.101254.82040@g14g2000cwa.googlegroups.com>
David Steuber wrote:
> "Christophe Turle" <······@nospam.com> writes:
>
> > "Christophe Rhodes" <·····@cam.ac.uk> a écrit dans le message de
news:
> > ··············@cam.ac.uk...
> >
> > I bet that the vast majority of humans don't agree with your sentence and
> > will tell you the contrary. When they write 0.1111 they mean 1111/10000.
>
> I'll provide a data point.  I agree with Christophe.

Ack!  I meant C. Rhodes.
From: Gareth McCaughan
Subject: Re: demonic numbers !
Date: 
Message-ID: <87vf95mat6.fsf@g.mccaughan.ntlworld.com>
David Steuber wrote:

> "Christophe Turle" <······@nospam.com> writes:
> 
>> "Christophe Rhodes" <·····@cam.ac.uk> a �crit dans le message de news: 
>> ··············@cam.ac.uk...
...
> I'll provide a data point.  I agree with Christophe.

In most contexts, that would be sufficient to indicate which
side you're on. However, the two disagreeing posters quoted
above are Christophe Turle and Christophe Rhodes. :-)

(Yes, it *was* clear from what you then went on to say...)

-- 
Gareth McCaughan
.sig under construc
From: Coby Beck
Subject: Re: demonic numbers !
Date: 
Message-ID: <7JvNd.14580$tU6.4781@edtnps91>
"Christophe Turle" <······@nospam.com> wrote in message 
····························@news.free.fr...
> "Christophe Rhodes" <·····@cam.ac.uk> a �crit dans le message de news: 
> ··············@cam.ac.uk...
>> draws, but it is prety close.)  You seem to be implying that your
>> personal vocabulary is somehow the unique "human convention" -- and,
>> I'm sorry to have to tell you, it isn't.
>
> it is.
>

Hi, Christophe, sorry to be "ganging up" on you here, but you appear to be 
relying mostly on argument by consensus, so I will just add that I, too, am 
human and do not agree that .1111 means 1111/10000 by default.

> Your community is not human community it is a subset of it.

> want a proof ?

This i would like to see!
But an interesting experiment might be a multiple choice question you could 
give to different sets of people:

What does .1111 mean?

a) .1111d0
b) 1111/10000
c) 1.111x10e-4
d) all of the above

I'd wager that the vast majority of groups polled would select d or not 
understand the question.  Of those who understood the notational differences 
you would get a split between d and "can't answer this trick question, 
sorry"

>> "if I write 0.1111, I imply something different from if I write 
>> 1111/10000"
>
> I bet that the vast majority of humans don't agree with your sentence and 
> will tell you the contrary. When they write 0.1111 they mean 1111/10000.

If you stop and take a breath, I am sure you will agree that most people do 
not understand any distinction.

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: David Steuber
Subject: Re: demonic numbers !
Date: 
Message-ID: <1107747093.500732.154910@l41g2000cwc.googlegroups.com>
Coby Beck wrote:
>
> What does .1111 mean?
>
> a) .1111d0
> b) 1111/10000
> c) 1.111x10e-4
> d) all of the above

Certainly not C, although I might buy 1.111x10e-1 (or 1.111e-1), and
therefore also not D.

Being pedantic, I would also discount A because a single float is not
really the same as a double float.  B loses out not just for reasons
of type, but also for lack of equality:

CL-USER> (equal 0.1111 1111/10000)
NIL

Then again, A suffers the same problem:

CL-USER> (equal 0.1111 0.1111d0)
NIL
CL-USER> (= 0.1111 0.1111d0)
NIL

(both ways just to be sure)

> I'd wager that the vast majority of groups polled would select d or not
> understand the question.  Of those who understood the notational differences
> you would get a split between d and "can't answer this trick question,
> sorry"

Trick question is probably the right answer.

If I obtained 0.1111 as a measurement, then the number is most
definitely approximate.  That is the typical use of floating point.  If
that number was intended to be an exact fraction of a dollar, then you
are already likely to be in trouble.  This is not the way to count
money!

At least 1111/10000 is unambiguously exact.

Just for grins, I tried some stuff in Python.  For all the hype, I
would say that this language has its issues as well, not the least of
which is the apparent inability to express the size of the float (it
appears to take doubles by default):

$ python
Python 2.3 (#1, Sep 13 2003, 00:49:11)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1495)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 0.1111 * 9
0.99990000000000001
>>> 1/2
0

Gee, not only no rationals, but it divides as C does!
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <42068dc9$0$1007$626a14ce@news.free.fr>
"Coby Beck" <·····@mercury.bc.ca> a �crit dans le message de news: 
····················@edtnps91...
>
> Hi, Christopher, sorry to be "ganging up" on you here

Don't be sorry. Each one expresses his point of view. No trouble with that 
;)

> but you appear to be using mostly argument by consensus here so I will 
> just add that I too, am human and do not agree that .1111 means 1111/10000 
> by default.
>
>> Your community is not human community it is a subset of it.
>
>> want a proof ?
>
> This i would like to see!
> But an interesting experiment might be a multiple choice question you 
> could give to different sets of people:

ok.

> What does .1111 mean?
>
> a) .1111d0
> b) 1111/10000
> c) 1.111x10e-4
> d) all of the above
>
> I'd wager that the vast majority of groups polled would select d or not 
> understand the question.
>  Of those who understood the notational differences you would get a split 
> between d and "can't answer this trick question, sorry"

They will all understand the question. But the vast majority will not 
understand a), and even less c), yet all will agree that b) is potentially 
true (if not completely true).

So what should be the default interpretation?

b) seems the reasonable choice.

Another thing: if one has to write the rational 1111/10 on paper, they will 
write 111.1 and not 1111/10 (except when doing math). Did you ever see that 
notation used for prices or temperatures?


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Coby Beck
Subject: Re: demonic numbers !
Date: 
Message-ID: <QDxNd.14639$tU6.2120@edtnps91>
"Christophe Turle" <······@nospam.com> wrote in message 
·····························@news.free.fr...
> "Coby Beck" <·····@mercury.bc.ca> a �crit dans le message de news: 
> ····················@edtnps91...
>> What does .1111 mean?
>>
>> a) .1111d0
>> b) 1111/10000
>> c) 1.111x10e-4
>> d) all of the above
>>
>> I'd wager that the vast majority of groups polled would select d or not 
>> understand the question.
>>  Of those who understood the notational differences you would get a split 
>> between d and "can't answer this trick question, sorry"
>
> They will all understand the question. But the vast majority will not 
> understand a) and less c) but all will agree that b) is potentially true 
> (if not completly true).
>
> So what should be the default interpretation ?
>
> b) seems the reasonable choice.

Well, you would leave the decision to people who do not understand the 
question.  This seems doomed to fail, regardless of the consensus you might 
achieve.

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87oeesjt7b.fsf@nyct.net>
"Christophe Turle" <······@nospam.com> writes:

> An other thing : if one has to write on a paper the rational 1111/10 they 
> will write 111.1 and not 1111/10 (except when doing math). Did you ever see 
> this notation used for prices, temperature ?

Computers do math. They are _computing_ machines, particularly when it
comes to numbers. Most importantly, they are mostly used for engineering
or financial math. Small rounding differences are insignificant compared
to speed in those applications. If my calculation says to use 153.222m
instead of 153.221m of wire on a bridge, is that a big deal? If my
calculation tells me to buy some stock at 15.53 instead of 15.531 is
that a big deal? The important thing, especially for the financial
situation, is to get the result _fast_ so that the people doing the work
can get on with it and make money.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Pascal Bourguignon
Subject: Re: demonic numbers !
Date: 
Message-ID: <8765109w03.fsf@thalassa.informatimago.com>
Rahul Jain <·····@nyct.net> writes:

> "Christophe Turle" <······@nospam.com> writes:
> 
> > An other thing : if one has to write on a paper the rational 1111/10 they 
> > will write 111.1 and not 1111/10 (except when doing math). Did you ever see 
> > this notation used for prices, temperature ?
> 
> Computers do math. 

Not unless you're using Maxima or Mathematica, or you're doing an
exercise implementing some symbolic derivation algorithm in Lisp.

Computers do what Turing Machines do: they shift states.


> They are _computing_ machines, particularly when it
> comes to numbers. 

Not really.  You can do statistics:


[···@thalassa sourceforge]$ ls
./            cl-ldap/  cljl/            contraband/  mel-logic/    weird-irc/
../           cl-pdb/   clocc/           garnetlisp/  mobile-lisp/  wilbur-rdf/
Makefile      cl-sql/   clops/           langband/    mystica/      wmbot/
albert/       claim/    clore/           lavlet/      robocells/
cclan/        clapps/   cltte/           lift/        sakala/
chess2001/    claws/    com-lisp-utils/  lisa/        sase/
cl-cookbook/  clicc/    commonmusic/     matlisp/     series/

[···@thalassa sourceforge]$ find . -type f -name \*.lisp | while read f ; do \
    sed -e 's/  ··@/g'<$f|tr ·@' '\012'; done > /tmp/crude
[···@thalassa sourceforge]$ grep -c -E '\([-+*/]' /tmp/crude
28199
[···@thalassa sourceforge]$ grep -c -E '\([^-+*/]' /tmp/crude
724932

Less than 4% of functions used are arithmetic operators. (The other
mathematical functions are orders of magnitude less used).


[···@thalassa sourceforge]$ grep -c -E '^[-+]?[0-9]+(\.[0-9]*)?([defgDEFG][-+]?)?$' /tmp/crude 
59630
[···@thalassa sourceforge]$ grep -c -E '[^("]' /tmp/crude 
2779784

Less than 2% of the non-function, non-string tokens are numbers.
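
(The shell pipeline above lost some characters to the archive's address
obfuscation. A rough Common Lisp sketch of the same first count --
count-arith-calls is an invented name, and the token heuristic is as crude
as the original, ignoring strings and comments:)

(defun count-arith-calls (file)
  ;; Count open parens followed immediately by an arithmetic operator,
  ;; and open parens overall; returns both counts as two values.
  (with-open-file (in file)
    (loop with arith = 0
          with total = 0
          for prev = #\Space then c
          for c = (read-char in nil nil)
          while c
          do (when (char= prev #\()
               (incf total)
               (when (find c "+-*/") (incf arith)))
          finally (return (values arith total)))))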

For more fun, try to disassemble the programs you've got on your
system and count the number of floating point and integer arithmetic
instructions vs. the other instructions.





I guess you've got a STRONG ENIAC influence.  Yes, ENIAC did only
compute numbers.  But my Macintosh never computed a lot of numbers; it
computed a lot more pixels than numbers.

Remember, computers are not calculettes, they are _ordinateurs_!

Ordinator \Or"di*na`tor\, n. [L.]
     One who ordains or establishes; a director. [R.] --T. Adams.




-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

In a World without Walls and Fences, 
who needs Windows and Gates?
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <878y5wgr1n.fsf@nyct.net>
Pascal Bourguignon <····@mouse-potato.com> writes:

> Rahul Jain <·····@nyct.net> writes:
>
>> "Christophe Turle" <······@nospam.com> writes:
>> 
>> > An other thing : if one has to write on a paper the rational 1111/10 they 
>> > will write 111.1 and not 1111/10 (except when doing math). Did you ever see 
>> > this notation used for prices, temperature ?
>> 
>> Computers do math. 
>
> Not unless you're using Maxima or Mathematica, or you're doing and
> exercice implementing some symbolic derivation algorithm in lisp.
>
> Computer do what Turing Machines do: they shift states.

As Mr. Turle would say, you have to take the context into account. When
dealing with numbers, they do (my bad) _arithmetic_.

And I'm not talking about idealized computing machines here, I'm talking
about the computers we actually have. Also, the notation used for prices and
temperatures is the same, but they should be dealt with very differently
inside the computer.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Wade Humeniuk
Subject: Re: demonic numbers !
Date: 
Message-ID: <GpBNd.11618$gA4.9436@edtnps89>
Coby Beck wrote:

> 
> This i would like to see!
> But an interesting experiment might be a multiple choice question you could 
> give to different sets of people:
> 
> What does .1111 mean?
> 
> a) .1111d0
> b) 1111/10000
> c) 1.111x10e-4
> d) all of the above
> 
>

Actually I see that it is meant as .1111... (repeating
decimal)

which is 1/9

Wade
From: Coby Beck
Subject: Re: demonic numbers !
Date: 
Message-ID: <f3CNd.9248$K54.3745@edtnps84>
"Wade Humeniuk" <··················@telus.net> wrote in message 
·························@edtnps89...
> Coby Beck wrote:
>> This i would like to see!
>> But an interesting experiment might be a multiple choice question you 
>> could give to different sets of people:
>>
>> What does .1111 mean?
>>
>> a) .1111d0
>> b) 1111/10000
>> c) 1.111x10e-4
>> d) all of the above
>>
>
> Actually I see that it is meant as .1111... (repeating
> decimal)
>
> which is 1/9

Darn!  I knew there were more choices I was going to put in before I got to 
writing it.  And with David Steuber's correction, the question should read:

What does .1111 mean?

a) .1111d0
b) 1111/10000
c) 1.111x10e-1
d) 1/9
e) all of the above

But you get the point...

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: Wade Humeniuk
Subject: Re: demonic numbers !
Date: 
Message-ID: <BEMNd.19496$gA4.7628@edtnps89>
Coby Beck wrote:

> Darn!  I knew there were more choices I was going to put in before I got to 
> writing it.  And with David Steuber's correction, the question should read:
> 
> What does .1111 mean?
> 
> a) .1111d0
> b) 1111/10000
> c) 1.111x10e-1
> d) 1/9
> e) all of the above
> 
> But you get the point...
> 

Yup, I get the point.  What the person MEANT depends on the context
in which it was expressed.  In a computer programming context, .1111
could mean a binary fraction.  In business it could mean 11.11%, compounded
annually.  I cannot think of a situation where .1111 means 1111/10000
unless it is an artificial human number like an interest rate or
a quick approximation for 1/9 when using a hand-held calculator.
Computers make honest men of us all by forcing us to say
exactly what we mean.

A number like that might show up in physics or chemistry, but there it
means .1111+/-.00005.  To leave off the uncertainty there is to
entertain disaster.

Wade
From: Cameron MacKinnon
Subject: Re: demonic numbers !
Date: 
Message-ID: <YcidnfgT54uDOJrfRVn-3w@golden.net>
Wade Humeniuk wrote:
> Coby Beck wrote:
> 
>>
>> This i would like to see!
>> But an interesting experiment might be a multiple choice question you 
>> could give to different sets of people:
>>
>> What does .1111 mean?
>>
>> a) .1111d0

This is nonsensical except to some computer programmers. Among the 
broader numerate population, the letter d never appears in the middle of 
a number.

>> b) 1111/10000

Everyone is taught in primary school that 1111/10000 and 0.1111 are the 
same number. Lispers associate fractions with exactitude, but this is 
certainly not the case among the broader public.

>> c) 1.111x10e-4

...and they learn that scientific notation is used as a terse notation 
for numbers whose significant digits lie far from the decimal point. The 
notation implies a physical constant or measurement, which further 
implies limited accuracy and finite error.

>> d) all of the above

Programmers tend to make artificial distinctions between the above 
number formats, because our languages overload the notation to choose 
between various internal representations, something non programmers 
don't know about.

> Actually I see that it is meant as .1111... (repeating
> decimal)
> 
> which is 1/9

This is reasonable, but I would have lost marks in elementary school if 
I'd said that 0.1111 and 0.1111... were equivalent. Further, and this is 
key, I don't think casual users and the nearly innumerate would be 
surprised if they punched in 0.1111 and the computer used that number 
exactly (instead of guessing that 1/9 was desired) for their 
calculation. Even if they WERE surprised, there's a logical and 
intuitive way to hint to the machine that more digits are required -- 
just add more digits.

Of course, I don't think there's anything stopping a conforming Lisp 
from storing 0.1111 as 3^-2 so people writing portable code shouldn't 
depend on it being otherwise.


The quiz writer didn't offer the choice of 7455795/67108864. I wonder 
why not?

The whole quiz is bogus. It is attempting to enlighten us on the 
question "what should a computer assume about a number, absent context?" 
A better question to investigate might be "what can a computer discover 
about a number, given context?" Might we not use hints such as whether 
the number is involved in indexing, whether it is converted to base 10 
(i.e. printed) and with what precision?
From: Barry Margolin
Subject: Re: demonic numbers !
Date: 
Message-ID: <barmar-362415.21232406022005@comcast.dca.giganews.com>
In article <·······················@news.free.fr>,
 "Christophe Turle" <······@nospam.com> wrote:

> I bet that the vast majority of humans don't agree with your sentence and 
> will tell you the contrary. When they write 0.1111 they mean 1111/10000.

I think this argument is pretty silly.  What they intend probably 
depends on context.

If you say that the normal human body temperature is 98.6 degrees, you 
don't really mean (if you know what you're talking about) it's exactly 
98.6000000 -- it's *about* 98.6.  So a numerical system that deals with 
approximations will generally work fine.

But if you're dealing with money, and you pay a bill for $98.60, you 
would be very surprised if $98.61 were taken out of your checking 
account.

Floating point works OK in scientific applications, where the readings 
that come from measurement devices are inherently noisy.  Although what 
you really might want is an arithmetic system that automatically 
incorporates margins of error, e.g. 100.1+/-.05 + 51.3+/-.05 = 
151.4+/-.1.
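
A toy sketch of such a system (approx and approx+ are invented names here;
real interval arithmetic needs outward rounding, more operations, and care
with correlated errors):

(defstruct (approx (:constructor approx (value error)))
  value   ; central value, kept exact as a rational
  error)  ; half-width of the uncertainty interval

(defun approx+ (a b)
  ;; Under addition, central values add and so do the error bounds.
  (approx (+ (approx-value a) (approx-value b))
          (+ (approx-error a) (approx-error b))))

(approx+ (approx 1001/10 1/20) (approx 513/10 1/20))
=> #S(APPROX :VALUE 757/5 :ERROR 1/10)   ; i.e. 151.4 +/- .1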

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <4207e542$0$492$626a14ce@news.free.fr>
"Barry Margolin" <······@alum.mit.edu> a �crit dans le message de news: 
····························@comcast.dca.giganews.com...
> In article <·······················@news.free.fr>,
> "Christophe Turle" <······@nospam.com> wrote:
>
>> I bet that the vast majority of humans don't agree with your sentence and
>> will tell you the contrary. When they write 0.1111 they mean 1111/10000.
>
> I think this argument is pretty silly.  What they intend probably
> depends on context.
>
> If you say that the normal human body temperature is 98.6 degrees, you
> don't really mean (if you know what you're talking about) it's exactly
> 98.6000000 -- it's *about* 98.6.  So a numerical system that deals with
> approximations will generally work fine.
>
> But if you're dealing with money, and you pay a bill for $98.60, you
> would be very surprised if $98.61 were taken out of your checking
> account.

Even if temperatures are not exact, I'm curious to hear what an end-user will 
say if, after he has entered 98.6, the display window shows "98.60001" ...

> Floating point works OK in scientific applications, where the readings
> that come from measurement devices are inherently noisy.  Although what
> you really might want is an arithmetic system that automatically
> incorporates margins of error, e.g. 100.1+/-.05 + 51.3+/-.05 =
> 151.4+/-.1.

Sincerely: no. But it may be useful for others.


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Coby Beck
Subject: Re: demonic numbers !
Date: 
Message-ID: <2eVNd.24937$tU6.11363@edtnps91>
"Christophe Turle" <······@nospam.com> wrote in message 
····························@news.free.fr...
> Even if temperature are not exact, i'm curious to hear what a end-user 
> will say if after he has entered 98.6, The show window displays "98.60001" 
> ...

Oh, probably the same thing he would say if the display showed "986/10", no?
;)

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <4209c496$0$26811$626a14ce@news.free.fr>
"Coby Beck" <·····@mercury.bc.ca> a �crit dans le message de news: 
·····················@edtnps91...
>
> "Christophe Turle" <······@nospam.com> wrote in message 
> ····························@news.free.fr...
>> Even if temperature are not exact, i'm curious to hear what a end-user 
>> will say if after he has entered 98.6, The show window displays 
>> "98.60001" ...
>
> Oh, probably the same thing he would say if the display showed "986/10", 
> no?
> ;)
>


It was implictly implied that if i use one notation as the reading one, it 
should also be used as the writting one. so It should be displayed as 
"98.6". Of course some pretty-printer variables could change this behavior.


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Coby Beck
Subject: Re: demonic numbers !
Date: 
Message-ID: <aRtOd.42989$gA4.37481@edtnps89>
"Christophe Turle" <······@nospam.com> wrote in message 
······························@news.free.fr...
> "Coby Beck" <·····@mercury.bc.ca> a �crit dans le message de news: 
> ·····················@edtnps91...
>>
>> "Christophe Turle" <······@nospam.com> wrote in message 
>> ····························@news.free.fr...
>>> Even if temperature are not exact, i'm curious to hear what a end-user 
>>> will say if after he has entered 98.6, The show window displays 
>>> "98.60001" ...
>>
>> Oh, probably the same thing he would say if the display showed "986/10", 
>> no?
>> ;)
>>
> It was implictly implied that if i use one notation as the reading one, it 
> should also be used as the writting one. so It should be displayed as 
> "98.6". Of course some pretty-printer variables could change this 
> behavior.

Christophe, you are missing my point again.  98.6 as user input is four 
chars and nothing else, absent application-specific information.  Your 
example of a bad window display was a reference to the internal 
representation of the number Lisp would read from that input.  My example 
was meant to refer to the internal representation you wish Lisp read from 
that input.  And in fact it was a kinder presentation (by oversight):

CL-USER 10 > (format t "Your temperature is ~A" 493/5)
Your temperature is 493/5

The point you are missing is that it is bad programming practice to just 
dump raw data on a user's interface.  Since this situation is about 
temperatures, you should use "~,1F" (do not mess around with the 
pretty-printer for this).  It is the same bad mistake to just parse raw data 
with no attention to what it really is.
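
For instance (~F coerces a rational argument to a float before printing, so 
the rational stays exact right up to the display):

CL-USER 11 > (format t "Your temperature is ~,1F" 493/5)
Your temperature is 98.6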

That is also the larger point of this whole debate: numbers in a data file 
are NOT floats or double-floats or rationals or integers.  They are 
frequencies or Dewey-decimal labels or temperatures or money or SSNs or 
distances, etc.

And while I may agree that the vast majority of naive users assume typing 
98.6 means 98.600000000000000000000000000000000000000000... to the computer 
(because computers are real smart) I daresay the vast majority of decimal 
numbers in data files are NOT and NEVER WERE the result of computations with 
rationals.

-- 
Coby Beck
(remove #\Space "coby 101 @ big pond . com")
From: Cameron MacKinnon
Subject: Re: demonic numbers !
Date: 
Message-ID: <ssOdnU7Ts5747pffRVn-1A@golden.net>
Coby Beck wrote:
> And while I may agree that the vast majority of naive users assume
> typing 98.6 means 98.600000000000000000000000000000000000000000... to
> the computer (because computers are real smart) I daresay the vast
> majority of decimal numbers in data files are NOT and NEVER WERE the
> result of computations with rationals.

This is a difficult argument, but I'd say you're wrong. If you look at
the world's largest databases, they're rationals. Filthy lucre accounts
for numbers expressed in 1/100ths, 1/8ths and 1/64ths, and time for
numbers in 1/60ths. Sure, there's the odd huge database of weather and
satellite information, but the majority of the biggies are full of exact
numbers, not inexact physical measurements.

Similarly, the majority of small and mid size corporate databases are
much more likely to contain pure numbers than physical approximations.

Now the typical HOME user with a hard drive full of MP3s and porn has
got a preponderance of approximate numbers, if you want to count those
data files.
From: Hartmann Schaffer
Subject: Re: demonic numbers !
Date: 
Message-ID: <glyOd.4281$P_3.27683@newscontent-01.sprint.ca>
Cameron MacKinnon wrote:
> Coby Beck wrote:
> 
>> And while I may agree that the vast majority of naive users assume
>> typing 98.6 means 98.600000000000000000000000000000000000000000... to
>> the computer (because computers are real smart) I daresay the vast
>> majority of decimal numbers in data files are NOT and NEVER WERE the
>> result of computations with rationals.
> 
> 
> This is a difficult argument, but I'd say you're wrong. If you look at
> the world's largest databases, they're rationals. Filthy lucre accounts
> for numbers expressed in 1/100ths, 1/8ths and 1/64ths, and time for
> numbers in 1/60ths. Sure, there's the odd huge database of weather and
> satellite information, but the majority of the biggies are full of exact
> numbers, not inexact physical measurements.

All very nice, but even if your statement is correct, there are still 
numerous data files left that are the result of measurements.

> ...

hs
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87k6pgjsqw.fsf@nyct.net>
Cameron MacKinnon <··········@clearspot.net> writes:

> This is a difficult argument, but I'd say you're wrong. If you look at
> the world's largest databases, they're rationals. Filthy lucre accounts
> for numbers expressed in 1/100ths, 1/8ths and 1/64ths, and time for
> numbers in 1/60ths. Sure, there's the odd huge database of weather and
> satellite information, but the majority of the biggies are full of exact
> numbers, not inexact physical measurements.

Most financial data is the result of approximate solving of large matrix
equations or of partial differential equations. Time is also rounded,
sometimes to the nanosecond, sometimes to the millisecond, sometimes to
the second, sometimes to the minute, sometimes to the day, sometimes to
the year... and probably off from UTC by as much as a second or two if
it claims to be rounded to the millisecond... sometimes worse.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Kent M Pitman
Subject: Re: demonic numbers !
Date: 
Message-ID: <uzmyd5vh2.fsf@nhplace.com>
"Christophe Turle" <······@nospam.com> writes:

> It was implictly implied that if i use one notation as the reading one, it 
> should also be used as the writting one. so It should be displayed as 
> "98.6". Of course some pretty-printer variables could change this behavior.

This is a hilarious example to have chosen, by the way.

As the tale was related to me (and I haven't ever bothered to check, so
anyone with superior knowledge of history should chime in and correct me),
the human body temperature was originally pegged in Centigrade (37) and not
to any remarkably special precision, as I understand it.  That is, it might
be plus or minus a degree Centigrade.  But, as I was told, it was later
translated to Fahrenheit by the ordinary formula 9/5C+32=F, such that
it was projected into Fahrenheit without attempt to carry over the margin
of error.  As it was related to me, most people believe 98.6 is +/- .1 since
it's expressed to fine precision.  But in fact it might be +/- 1.8, since 
that's how many Fahrenheit degrees in a centigrade degree.
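
(Spelled out: 9/5 * 37 + 32 = 66.6 + 32 = 98.6, and pushing the margin 
through the same formula, 9/5 * (37 +/- 1) + 32 = 98.6 +/- 1.8.)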

So whether they expected 98.6 to mean 98.600000 or they got a small degree
of error, it pales compared to the loss of precision that the "ordinary person"
(you remember that person--the one who has superior knowledge of math to us
computer programmers and knows how better to format things and what better
to expect from computations) probably does without even thinking about it.

As a further aside, I've been told that another example of attempts to
adjust expressions from one venue to another, adjusting weirdly for
units, is the translation of the original expression "7 stone
weakling" (which uses the old unit of measure where 1 stone = 14
pounds) to "98 pound weakling", again suggesting that the last digit
carries meaning.

(And even if what I've been told about these turns out to be somehow wrong,
I'm sure the general paradigm that they purport to illustrate is still
active, and says something important about armchair understanding of math.)
From: Don Geddis
Subject: Re: demonic numbers !
Date: 
Message-ID: <87oeetv06p.fsf@sidious.geddis.org>
Kent M Pitman <······@nhplace.com> wrote on Wed, 09 Feb 2005:
> As the tale was related to me (and I haven't ever bothered to check, so
> anyone with superior knowledge of history should chime in and correct me),
> the human body temperature was originally pegged in Centigrade (37) and not
> to any remarkably special precision, as I understand it.  That is, it might
> be plus or minus a degree Centigrade.  But, as I was told, it was later
> translated to Fahrenheit by the ordinary formula 9/5C+32=F, such that
> it was projected into Fahrenheit without attempt to carry over the margin
> of error.

The Fahrenheit scale was created well before Celsius (1724 vs. 1742).  It is
likely that an attempt was made for human body temperature to be 100 degrees
Fahrenheit.  I think the error was that average human body temperature is
actually a few degrees lower than was originally thought when the Fahrenheit
scale was being created.

Some possible origins of the scale can be seen here:
        http://en.wikipedia.org/wiki/Fahrenheit
I've never heard the story that the 98.6 number was a calculation from
Celsius.

Celsius has better real-world connections, with 0 degree being the freezing
point of water, and 100 degrees being the boiling point.

        -- Don
_______________________________________________________________________________
Don Geddis                  http://don.geddis.org/               ···@geddis.org
If the bark is the skin of the tree, then what are the acorns?  You don't want
to know.
	-- Deep Thoughts, by Jack Handey [1999]
From: Peter Seibel
Subject: Re: demonic numbers !
Date: 
Message-ID: <m3d5v9gply.fsf@javamonkey.com>
Don Geddis <···@geddis.org> writes:

> Celsius has better real-world connections, with 0 degree being the
> freezing point of water, and 100 degrees being the boiling point.

At sea level.

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87d5v8jsjd.fsf@nyct.net>
Peter Seibel <·····@javamonkey.com> writes:

> Don Geddis <···@geddis.org> writes:
>
>> Celsius has better real-world connections, with 0 degree being the
>> freezing point of water, and 100 degrees being the boiling point.
>
> At sea level.

Actually, that varies. It's at 760 torr. :)

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Cameron MacKinnon
Subject: Re: demonic numbers !
Date: 
Message-ID: <8u2dndEjLPim5JffRVn-1A@golden.net>
Kent M Pitman wrote:
> As the tale was related to me (and I haven't ever bothered to check, so
> anyone with superior knowledge of history should chime in and correct me),
> the human body temperature was originally pegged in Centigrade (37) and not
> to any remarkably special precision, as I understand it.  That is, it might
> be plus or minus a degree Centigrade.  But, as I was told, it was later
> translated to Fahrenheit by the ordinary formula 9/5C+32=F, such that
> it was projected into Fahrenheit without attempt to carry over the margin
> of error.  As it was related to me, most people believe 98.6 is +/- .1 since
> it's expressed to fine precision.  But in fact it might be +/- 1.8, since 
> that's how many Fahrenheit degrees in a centigrade degree.

I've heard that Fahrenheit measured the body temperature of a horse and 
used that as 100 degrees. There seems to be a lot of uncertainty about 
the exact origin of the numbers in his scale, though.
From: Bruce Stephens
Subject: Re: demonic numbers !
Date: 
Message-ID: <87bratqvoh.fsf@cenderis.demon.co.uk>
Kent M Pitman <······@nhplace.com> writes:

[...]

> This is a hilarious example to have chosen, by the way.
>
> As the tale was related to me (and I haven't ever bothered to check,
> so anyone with superior knowledge of history should chime in and
> correct me), the human body temperature was originally pegged in
> Centigrade (37) and not to any remarkably special precision, as I
> understand it.  That is, it might be plus or minus a degree
> Centigrade.  But, as I was told, it was later translated to
> Fahrenheit by the ordinary formula 9/5C+32=F, such that it was
> projected into Fahrenheit without attempt to carry over the margin
> of error.  As it was related to me, most people believe 98.6 is +/-
> .1 since it's expressed to fine precision.  But in fact it might be
> +/- 1.8, since that's how many Fahrenheit degrees in a centigrade
> degree.

That's confirmed by a quick web search (i.e., it's on the internet, so
it must be true): <http://www.fallacyfiles.org/fakeprec.html>.

So 98.6 can't be a precise average, either (or not an accurate one,
anyway), if the average is in fact 98.2.  (About a year ago I spent a
couple of weeks in hospital, and amongst other things they measured my
temperature several times a day.  I forget the range that they
regarded as normal, but I think it was about 1C either way.)

<http://ts.nist.gov/ts/htdocs/200/202/metrsty3.htm> is a Metric Style
Guide from NIST.  Amusingly, under Conversions it warns not to use
more precision than is justified, but in the table just above, it
lists normal body temperature as 37C, 98.6F.  (It *does* give the
correct pronunciation of km, IMHO, although I fear the wrong one has
now won.)

[...]
From: Harald Hanche-Olsen
Subject: Re: demonic numbers !
Date: 
Message-ID: <pcopsz978us.fsf@shuttle.math.ntnu.no>
+ Kent M Pitman <······@nhplace.com>:

| the human body temperature was originally pegged in Centigrade (37)

To take this completely off topic, what do Americans have against
Celsius?  Is there a problem relating to Swedes?

  http://en.wikipedia.org/wiki/Celsius

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
  than thinking does: but it deprives us of whatever chance there is
  of getting closer to the truth.  -- C.P. Snow
From: William Bland
Subject: Re: demonic numbers !
Date: 
Message-ID: <pan.2005.02.09.21.09.54.246798@abstractnonsense.com>
On Wed, 09 Feb 2005 21:54:03 +0100, Harald Hanche-Olsen wrote:

> + Kent M Pitman <······@nhplace.com>:
> 
> | the human body temperature was originally pegged in Centigrade (37)
> 
> To take this completely off topic, what do Americans have against
> Celsius?  Is there a problem relating to Swedes?
> 
>   http://en.wikipedia.org/wiki/Celsius


From what I've heard, the Fahrenheit scale was designed with one end at
the temperature of a horse, and the other end at the temperature at which
an equal mix of ice and salt begins to melt.

Yeah sure, *much* more useful than freezing and boiling point[1] ;-)

Cheers,
	Bill.

[1] Usual disclaimers about purity of water, atmospheric pressure, etc.
From: Christopher C. Stacy
Subject: Re: demonic numbers !
Date: 
Message-ID: <uoeeta1gk.fsf@news.dtpq.com>
Harald Hanche-Olsen <······@math.ntnu.no> writes:

> + Kent M Pitman <······@nhplace.com>:
> 
> | the human body temperature was originally pegged in Centigrade (37)
> 
> To take this completely off topic, what do Americans have against
> Celsius?  Is there a problem relating to Swedes?

No, it's just another metric thing, something that we're not
accustomed to.   If we went metric, we'd change scales.

We use C and K for science, but F for cooking and air.
From: Lars Brinkhoff
Subject: Re: demonic numbers !
Date: 
Message-ID: <85is50g746.fsf@junk.nocrew.org>
> Harald Hanche-Olsen <······@math.ntnu.no> writes:
> > + Kent M Pitman <······@nhplace.com>:
> > | the human body temperature was originally pegged in Centigrade (37)
> > To take this completely off topic, what do Americans have against
> > Celsius?  Is there a problem relating to Swedes?
> > http://en.wikipedia.org/wiki/Celsius

······@news.dtpq.com (Christopher C. Stacy) writes:
> No, it's just another metric thing, something that we're not
> accustomed to.  If we went metric, we'd change scales.  We use C and
> K for science, but F for cooking and air.

William Bland <·······@abstractnonsense.com> writes:
> From what I've heard, the Fahrenheit scale was designed with one end
> at the temperature of a horse, and the other end at the temperature
> at which an equal mix of ice and salt begins to melt.  Yeah sure,
> *much* more useful than freezing and boiling point[1] ;-)

I believe what Harald intended to ask was, why do Americans use the
word "Centigrade" instead of "Celsius"?

-- 
Lars Brinkhoff,         Services for Unix, Linux, GCC, HTTP
Brinkhoff Consulting    http://www.brinkhoff.se/
From: Harald Hanche-Olsen
Subject: Re: demonic numbers !
Date: 
Message-ID: <pcod5v8qouj.fsf@shuttle.math.ntnu.no>
+ Lars Brinkhoff <·········@nocrew.org>:

| I believe what Harald intended to ask was, why do Americans use the
| word "Centigrade" instead of "Celsius"?

That was indeed what I intended to ask.
Dammit, that is what I /thought/ I asked.

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
  than thinking does: but it deprives us of whatever chance there is
  of getting closer to the truth.  -- C.P. Snow
From: Thomas F. Burdick
Subject: Re: demonic numbers !
Date: 
Message-ID: <xcv4qgjik6j.fsf@conquest.OCF.Berkeley.EDU>
Harald Hanche-Olsen <······@math.ntnu.no> writes:

> + Kent M Pitman <······@nhplace.com>:
> 
> | the human body temperature was originally pegged in Centigrade (37)
> 
> To take this completely off topic, what do Americans have against
> Celsius?  Is there a problem relating to Swedes?
> 
>   http://en.wikipedia.org/wiki/Celsius

The Wiki page you cited pretty much supplies the answer.  For the
first 200 years of its existence, it was called "Centigrade".  Given
that the US doesn't use the scale widely, why /would/ you expect
Americans to track the name change?  "Celsius" is used sometimes,
though -- I don't think anyone really notices which word someone uses.
From: Kent M Pitman
Subject: Re: demonic numbers !
Date: 
Message-ID: <umzubf0w4.fsf@nhplace.com>
···@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> Harald Hanche-Olsen <······@math.ntnu.no> writes:
> 
> > + Kent M Pitman <······@nhplace.com>:
> > 
> > | the human body temperature was originally pegged in Centigrade (37)
> > 
> > To take this completely off topic, what do Americans have against
> > Celsius?  Is there a problem relating to Swedes?
> > 
> >   http://en.wikipedia.org/wiki/Celsius
> 
> The Wiki page you cited pretty much supplies the answer.  For the
> first 200 years of its existence, it was called "Centigrade".  Given
> that the US doesn't use the scale widely, why /would/ you expect
> Americans to track the name change?  "Celsius" is used sometimes,
> though -- I don't think anyone really notices which word someone uses.

I find it amusing that proponents of a naming scheme (metric) which has 
descriptive names ("centigrade" = "100 parts") would want to see this renamed
to honor a person.  Now we just need researchers with names like "John Foot",
"Sally Furlong", "Chris Yard", and so on to name the meter and its 
boringly-named friends after, and we'll REALLY spice up the metric system
namingwise.

It may not be "with the times" that I sometimes still refer to Celsius as
Centigrade, but certainly one is harder pressed to say it's ill-reasoned.
From: Harald Hanche-Olsen
Subject: Re: demonic numbers !
Date: 
Message-ID: <pcobraqx09b.fsf@shuttle.math.ntnu.no>
+ Kent M Pitman <······@nhplace.com>:

| I find it amusing

That is good, for otherwise the tenor of your post would seem to
indicate you took my objection way too seriously.

| that proponents of a naming scheme (metric) which has descriptive
| ("centigrade" = "100 parts") would want to see this renamed to honor
| a person.

Well, the metric system is more than a naming scheme.  And anyway, it
is already full of personal names - Ampere, Coulomb, Watt, Newton, the
list goes on and on.  And one reason for avoiding "centigrade" was
mentioned in the wikipedia page referenced earlier: It is very
confusing when used in a system that uses the "centi" prefix for other
purposes.  It makes you wonder what a grade is.

| Now we just need researchers with name like "John Foot", "Sally
| Furlong", "Chris Yard", and so on to name the meter, and its
| boringly-named friends after and we'll REALLY spice up the metric
| system namingwise.

Ah, you're not so serious after all.  Good.  How about "Rod Chain"?
No, that sounds more like an S&M porn star.

| It may not be "with the times" that I sometimes still refer to
| Celcius as Centigrade, but certainly one is harder pressed to say
| it's ill-reasoned.

I never argued that.  I just wondered if it was reasoned at all.

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
  than thinking does: but it deprives us of whatever chance there is
  of getting closer to the truth.  -- C.P. Snow
From: Christopher C. Stacy
Subject: Re: demonic numbers !
Date: 
Message-ID: <u3bw2ljx3.fsf@news.dtpq.com>
Harald Hanche-Olsen <······@math.ntnu.no> writes:
> Well, the metric system is more than a naming scheme.  And anyway, it
> is already full of personal names - Ampere, Coulomb, Watt, Newton,

Aren't those all just the traditional units that predate the "metric" system?  
We have always used all of those.  When people in the USA colloquially refer
to "the metric system", they are only talking about the measurement of length, 
weight, temperature, and pressure.
From: Jens Axel Søgaard
Subject: Re: demonic numbers !
Date: 
Message-ID: <420d4ee2$0$302$edfadb0f@dread12.news.tele.dk>
Christopher C. Stacy wrote:

> Harald Hanche-Olsen <······@math.ntnu.no> writes:
> 
>>Well, the metric system is more than a naming scheme.  And anyway, it
>>is already full of personal names - Ampere, Coulomb, Watt, Newton, 
> 
> Aren't those all just the traditional units that predate the "metric" system?  
> We have always used all of those.  When people in the USA colloquially refer
> to "the metric system", they are only talking about the measurement of length, 
> weight, temperature. and pressure.

Makes sense.

Some of the units like Newton are related to the unit of length though.

     1 N = 1 kg * m / s^2

-- 
Jens Axel Søgaard
From: Christopher C. Stacy
Subject: Re: demonic numbers !
Date: 
Message-ID: <uis4yr0xh.fsf@news.dtpq.com>
Jens Axel Søgaard <······@soegaard.net> writes:

> Christopher C. Stacy wrote:
> 
> > Harald Hanche-Olsen <······@math.ntnu.no> writes:
> >
> >>Well, the metric system is more than a naming scheme.  And anyway, it
> >> is already full of personal names - Ampere, Coulomb, Watt, Newton,
> > Aren't those all just the traditional units that predate the
> > "metric" system?  We have always used all of those.  When people in
> > the USA colloquially refer
> > to "the metric system", they are only talking about the measurement
> > of length, weight, temperature. and pressure.
> 
> Makes sense.
> 
> Some of the units like Newton are related to the unit of length though.
>      1 N = 1 kg * m / s^2


Computations involving force don't come up in the kitchen.
I'm not a mechanical or civil engineer, so I don't know what
people use for force -- sometimes dynes and ergs.
Physicists use Newtons. Engineers here commonly use psi
for pressure, though.

As I said, though, the main way in which the USA is not metric is 
in the kitchen and in measuring distances.  So, ounces and pounds
and gallons and feet and inches.  Automobile mechanics have two
sets of tools.  But science is just standard everywhere, I think.

Except for automobile mechanics, people here don't think anything
is broken.  And unlike the old world, we don't have any economic
need to be unit-compatible with our neighbors (although they may
feel the need to be compatible with us).   Rumors to the contrary,
it's not that "global" a world.
From: Harald Hanche-Olsen
Subject: Re: demonic numbers !
Date: 
Message-ID: <pcoy8dugh2f.fsf@shuttle.math.ntnu.no>
+ ······@news.dtpq.com (Christopher C. Stacy):

| Computations involving force don't come up in the kitchen.

Well, they do of course, except it is invisible to the user.  Whenever
you weigh something, you are really measuring the force of gravity
acting on the object.  But your scale converts that into mass,
assuming a standard acceleration of gravity.  I, too, tend to think
that a normal adult human weighs on the order of 80 kg.  If someone
tells me they weigh 800 N, it'll take me a little while to figure out
what they mean.  (And most people won't have a clue.)

| I'm not a mechanical or civil engineer, so I don't know what
| people use for force -- sometimes dynes and ergs.

In practice, even over here, people use kilograms, even though it is
utterly wrong.  If the human race ever gets off this mudball and
starts living in space, that will become impractical.

| As I said, though, the main way in which the USA is not metric is in
| the kitchen and in measuring distances.

And in outer space?  ;->  Okay, that was a cheap shot.  The Mars
rovers have certainly been a success, so we may hope that NASA have
learned that particular lesson.

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
  than thinking does: but it deprives us of whatever chance there is
  of getting closer to the truth.  -- C.P. Snow
From: Christopher C. Stacy
Subject: Re: demonic numbers !
Date: 
Message-ID: <usm417kvd.fsf@news.dtpq.com>
Harald Hanche-Olsen <······@math.ntnu.no> writes:

> + ······@news.dtpq.com (Christopher C. Stacy):
> 
> | Computations involving force don't come up in the kitchen.
> 
> Well, they do of course, except it is invisible to the user.

No, the cook in the kitchen never does any force computations.
(The scales he uses are measuring force, but that has nothing
to do with a human doing a computation, or needing to know
about Newtons.)  Cooks here use volume measurements or weigh
things in pounds and ounces.

> And in outer space?  ;->  Okay, that was a cheap shot.  

Huh? Is that where you're writing from?
It explains a lot.
From: Harald Hanche-Olsen
Subject: Re: demonic numbers !
Date: 
Message-ID: <pco8y5tguwy.fsf@shuttle.math.ntnu.no>
+ ······@news.dtpq.com (Christopher C. Stacy):

| Harald Hanche-Olsen <······@math.ntnu.no> writes:
| 
| > And in outer space?  ;->  Okay, that was a cheap shot.
| 
| Huh? Is that where you're writing from?

  Unfortunately not.  Well, actually, I sort of like it down here.

| It explains a lot.

Hardly, but this explains what I meant:

  http://www.space.com/news/mco_report-b_991110.html

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
  than thinking does: but it deprives us of whatever chance there is
  of getting closer to the truth.  -- C.P. Snow
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87zmxybnir.fsf@nyct.net>
······@news.dtpq.com (Christopher C. Stacy) writes:

> No, the cook in the kitchen never does any force computations.
> (The scales he uses are measuring force, but that has nothing
> to do with a human doing a computation, or needing to know
> about Newtons.)  Cooks here use volume measurements or weigh
> things in pounds and ounces.

... which are units of force, not of mass. Hence the use of psi as the
unit of pressure (force per unit area).

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Christopher C. Stacy
Subject: Re: demonic numbers !
Date: 
Message-ID: <uzmxye2ig.fsf@news.dtpq.com>
Rahul Jain <·····@nyct.net> writes:

> ······@news.dtpq.com (Christopher C. Stacy) writes:
> 
> > No, the cook in the kitchen never does any force computations.
> > (The scales he uses are measuring force, but that has nothing
> > to do with a human doing a computation, or needing to know
> > about Newtons.)  Cooks here use volume measurements or weigh
> > things in pounds and ounces.
> 
> ... which are units of force, not of mass. Hence the use of psi as the
> unit of pressure (force per unit area).

The cooks are not doing computations based on gravitational acceleration.
They are using measuring cups, in the units I indicated.
From: Thomas A. Russ
Subject: Re: demonic numbers !
Date: 
Message-ID: <ymir7jj0yym.fsf@sevak.isi.edu>
Harald Hanche-Olsen <······@math.ntnu.no> writes:

> ...  And one reason for avoiding "centigrade" was
> mentioned in the wikipedia page referenced earlier: It is very
> confusing when used in a system that uses the "centi" prefix for other
> purposes.  It makes you wonder what a grade is.

How interesting.  It made me realize that I had so thoroughly chunked
the word "Centigrade" that I didn't even think about it as sharing a
prefix.  Quite an interesting linguistic/cognitive revelation.

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Kent M Pitman
Subject: Re: demonic numbers !
Date: 
Message-ID: <ull9qh90c.fsf@nhplace.com>
···@sevak.isi.edu (Thomas A. Russ) writes:

> Harald Hanche-Olsen <······@math.ntnu.no> writes:
> 
> > ...  And one reason for avoiding "centigrade" was
> > mentioned in the wikipedia page referenced earlier: It is very
> > confusing when used in a system that uses the "centi" prefix for other
> > purposes.  It makes you wonder what a grade is.
> 
> How interesting.  It made me realize that I had so thoroughly chunked
> the word "Centigrade" that I didn't even think about it as sharing a
> prefix.  Quite an interesting linguistic/cognitive revelation.

Funny, for me it was just the opposite.  But in my case I knew that the 
Spanish word for "degree" is "grado", and the Portuguese is "grau", which
I suspect means it comes from the same root as "gradation", which makes a
reasonable gradati--er,--degree of sense.
From: Harald Hanche-Olsen
Subject: Re: demonic numbers !
Date: 
Message-ID: <pco7jlex01g.fsf@shuttle.math.ntnu.no>
+ ···@conquest.OCF.Berkeley.EDU (Thomas F. Burdick):

| For the first 200 years of its existence, it was called
| "Centigrade".  Given that the US doesn't use the scale widely, why
| /would/ you expect Americans to track the name change?

Um, no, I guess you have a point there.  And the much bigger question of
why the US resists metrization really does not belong on
comp.lang.lisp, so this is a good time for me to shut my mouth on this
issue.

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
  than thinking does: but it deprives us of whatever chance there is
  of getting closer to the truth.  -- C.P. Snow
From: Harald Hanche-Olsen
Subject: Re: demonic numbers !
Date: 
Message-ID: <pco3bw8ngbx.fsf@shuttle.math.ntnu.no>
+ Barry Margolin <······@alum.mit.edu>:


| Floating point works OK in scientific applications, where the readings
| that come from measurement devices are inherently noisy.  Although what
| you really might want is an arithmetic system that automatically
| incorporates margins of error, e.g. 100.1+/-.05 + 51.3+/-.05 =
| 151.4+/-.1.

Interval arithmetic is very difficult to work with and to do useful
things with, as far as I understand it.  One part of the problem is
that error estimates tend to explode because interdependences between
error terms are lost.  For a trivial example, consider x/(1+x).  If
you compute this in the obvious way (add 1 to x, then divide x by the
result) then the lower bound for the result will use the lower bound
for x in the numerator (assuming x is positive), and simultaneously
the upper bound for x in the denominator.  This is obviously wrong,
but getting rid of this sort of behaviour in a general, less
transparent setting certainly looks highly nontrivial to me.  For a
general solution, you would need not only to assign an interval for
each variable, but a region in n-space for every n-tuple of
variables.  Ouch!
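
To see the x/(1+x) example concretely, here is a minimal sketch of
naive interval arithmetic (mine, not from any actual interval
library), with intervals as (low . high) conses of rationals:

  (defun int-add (a b)
    (cons (+ (car a) (car b)) (+ (cdr a) (cdr b))))

  (defun int-div (a b)            ; assumes both intervals positive
    (cons (/ (car a) (cdr b)) (/ (cdr a) (car b))))

  ;; x known to lie in [9/10, 11/10]:
  (int-div '(9/10 . 11/10) (int-add '(1 . 1) '(9/10 . 11/10)))
  ;; => (3/7 . 11/19), roughly [0.429, 0.579], although the true
  ;; range of x/(1+x) there is [9/19, 11/21], roughly [0.474, 0.524].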

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
  than thinking does: but it deprives us of whatever chance there is
  of getting closer to the truth.  -- C.P. Snow
From: jayessay
Subject: Re: demonic numbers !
Date: 
Message-ID: <m3lla0j7r2.fsf@rigel.goldenthreadtech.com>
"Christophe Turle" <······@nospam.com> writes:

> "Christophe Rhodes" <·····@cam.ac.uk> a �crit dans le message de news: 
> ··············@cam.ac.uk...
...
> > draws, but it is pretty close.)  You seem to be implying that your
> > personal vocabulary is somehow the unique "human convention" -- and,
> > I'm sorry to have to tell you, it isn't.
> 
> it is.
> 
> Your community is not the human community; it is a subset of it.
> 
> want a proof ?
> 
> > "if I write 0.1111, I imply something different from if I write 
> > 1111/10000"
> 
> I bet that the vast majority of humans don't agree with your sentence and 
> will tell you the contrary. When they write 0.1111 they mean 1111/10000.

You are taking a single data point, yourself, and extrapolating to all
of humanity.  Do you have any idea how _insane_ this sounds?


/Jon

-- 
'j' - a n t h o n y at romeo/charley/november com
From: Cameron MacKinnon
Subject: Re: demonic numbers !
Date: 
Message-ID: <Zf2dnd5kN4qQaZrfRVn-qA@golden.net>
jayessay wrote:
> "Christophe Turle" <······@nospam.com> writes:
> 
> 
>>"Christophe Rhodes" <·····@cam.ac.uk> a �crit dans le message de news: 
>>··············@cam.ac.uk...
> 
>>>"if I write 0.1111, I imply something different from if I write 
>>>1111/10000"
>>
>>I bet that the vast majority of humans don't agree with your sentence and 
>>will tell you the contrary. When they write 0.1111 they mean 1111/10000.
> 
> 
> You are taking a single data point, yourself, and extrapolating to all
> of humanity.  Do you have any idea how _insane_ this sounds?

Not at all. I learned that equivalence in school before I was ten. It is 
quite reasonable to assume that most everyone who's had a primary 
education with Arabic numerals is also aware of the equivalence. That's 
billions of people. Some computer programmers have unlearned this.
From: Christophe Rhodes
Subject: Re: demonic numbers !
Date: 
Message-ID: <sqll9zxxmo.fsf@cam.ac.uk>
Cameron MacKinnon <··········@clearspot.net> writes:

> jayessay wrote:
>> You are taking a single data point, yourself, and extrapolating to
>> all of humanity.  Do you have any idea how _insane_ this sounds?
>
> Not at all. I learned that equivalence in school before I was
> ten. 

Irrespective of the fact that you too are guilty of the extrapolation
from a single data point, or two, to all of humanity, despite
demonstrated counterexamples, did you not also learn in school that
everything was made up of indivisible atoms, that the definition of a
species was the ability to provide fertile offspring, that the planets
go round the sun in ellipses, that dividing by zero is not allowed,
that you shouldn't end a sentence with a preposition, that the First
World War was caused by the assassination of Archduke Ferdinand?

> It is quite reasonable to assume that most everyone who's had a
> primary education with Arabic numerals is also aware of the
> equivalence.

Except for those people who have gone beyond primary education.

Christophe
From: jayessay
Subject: Re: demonic numbers !
Date: 
Message-ID: <m3d5vakitd.fsf@rigel.goldenthreadtech.com>
Cameron MacKinnon <··········@clearspot.net> writes:

> jayessay wrote:
> > "Christophe Turle" <······@nospam.com> writes:
> >
> >> "Christophe Rhodes" <·····@cam.ac.uk> a �crit dans le message de
> >> news: ··············@cam.ac.uk...
> >
> >>> "if I write 0.1111, I imply something different from if I write
> >>> 1111/10000"
> >>
> >> I bet that the vast majority of humans don't agree with your
> >> sentence and will tell you the contrary. When they write 0.1111
> >> they mean 1111/10000.
> > You are taking a single data point, yourself, and extrapolating
> > to all of humanity.  Do you have any idea how _insane_ this sounds?
> 
> Not at all. ...

And here I was thinking you were one of the saner people.  Oh well,
reality sucks.

/Jon

-- 
'j' - a n t h o n y at romeo/charley/november com
From: Michael Kappert
Subject: Re: demonic numbers !
Date: 
Message-ID: <36vg9gF570tg8U1@individual.net>
Cameron MacKinnon wrote:
> jayessay wrote:
>> "Christophe Turle" <······@nospam.com> writes:
>>> "Christophe Rhodes" <·····@cam.ac.uk> a �crit dans le message de 
>>>> "if I write 0.1111, I imply something different from if I write 
>>>> 1111/10000"
>>>
>>> I bet that the vast majority of humans don't agree with your sentence 
>>> and will tell you the contrary. When they write 0.1111 they mean 
>>> 1111/10000.
>>
>> You are taking a single data point, yourself, and extrapolating to all
>> of humanity.  Do you have any idea how _insane_ this sounds?
> 
> 
> Not at all. I learned that equivalence in school before I was ten. 

I agree.
Otoh, people usually mean *approximate* values, when they're using
fractions, don't they? "I'll be there in 1/2 hour." Does this mean
programming languages must read 1111/10000 as a float?
:^)

Michael
From: Cameron MacKinnon
Subject: Re: demonic numbers !
Date: 
Message-ID: <4YKdndDCo7DJDpffRVn-gQ@golden.net>
Michael Kappert wrote:
> Otoh, people usually mean *approximate* values, when they're using 
> fractions, don't they? "I'll be there in 1/2 hour." Does this mean 
> programming languages must read 1111/10000 as a float? :^)

It's an expectations thing. No neophyte would be surprised if they gave
a machine a batch of numbers (which were, in the person's mind,
approximate) and the machine took the numbers as exact and produced an
exact answer. A good portion of the population ARE surprised when they
give machines numbers to add and the machines take license with the
numbers and produce approximate answers.

There's a popular phrase in the United States right now: "Rounding
error." This phrase hasn't been in use in the general population for
very long. Where do you suppose it came from? Could it be that it's been
the computer guy's explanation to the broader user population whenever
their computers produce slightly wrong results?
From: Fred Gilham
Subject: Re: demonic numbers !
Date: 
Message-ID: <u7ll9xe45h.fsf@snapdragon.csl.sri.com>
Cameron MacKinnon <··········@clearspot.net> writes:
> It's an expectations thing. No neophyte would be surprised if they gave
> a machine a batch of numbers (which were, in the person's mind,
> approximate) and the machine took the numbers as exact and produced an
> exact answer. A good portion of the population ARE surprised when they
> give machines numbers to add and the machines take license with the
> numbers and produce approximate answers.

I only have one comment about this: "heavy boots."

-- 
Fred Gilham                                         ······@csl.sri.com
If there is one consistent theme to be found in the great works of the
20th century, it seems to me to be the presentation of a doomed quest:
the search to find something capable of filling that great void that
has been left in the soul of man by the repudiation of God.
                                            --- Skylar Hamilton Burris
From: Michael Kappert
Subject: Re: demonic numbers !
Date: 
Message-ID: <36vk3oF52gfshU1@individual.net>
Cameron MacKinnon wrote:
> Michael Kappert wrote:
> 
>> Otoh, people usually mean *approximate* values, when they're using 
>> fractions, don't they? "I'll be there in 1/2 hour." Does this mean 
>> programming languages must read 1111/10000 as a float? :^)
> 
> 
> It's an expectations thing. No neophyte would be surprised if they gave
> a machine a batch of numbers (which were, in the person's mind,
> approximate) and the machine took the numbers as exact and produced an
> exact answer. A good portion of the population ARE surprised when they
> give machines numbers to add and the machines take license with the
> numbers and produce approximate answers.

It doesn't really make sense to talk about "a good portion of
/the population/" when much of that population doesn't even know
how to switch on a computer.

What I'd like to know is why you want, if you do, to conflate
the interpretation of two distinct representations into one,
when it's /not/ really hard to remember how the computer
understands them, and they naturally match the respective
data types.

Michael


> There's a popular phrase in the United States right now: "Rounding
> error." This phrase hasn't been in use in the general population for
> very long. Where do you suppose it came from? Could it be that it's been
> the computer guy's explanation to the broader user population whenever
> their computers produce slightly wrong results?
From: Cameron MacKinnon
Subject: Re: demonic numbers !
Date: 
Message-ID: <t5GdndbvXIFSO5ffRVn-sw@golden.net>
Michael Kappert wrote:
> What I'd like to know is why you want, if you do, to conflate the
> interpretation of two distinct representations into one, when it's
> /not/ really hard to remember how the computer understands them, and
> they naturally match the respective data types.

1) The majority of computer users don't see the distinction - it's a
hard sciences/computer science construct

2) The majority of computer programmers aren't competent users of
floating point anyway, and they just pass its quirky behaviour straight
through to the users, above, who now think that our computers can't add.

3) Even the majority of graduates of top flight CS schools aren't
trained in numerical analysis

4) Expert computer programmers working in the most validated
environments screw it up sometimes (see "Patriot Missile Failure").

5) When someone means for a computer to compute something exactly and
the computer only approximates it, the results can be disastrous. There
must be far fewer cases where the computer's getting the answer "too
right" would lead to failure.

On balance, I believe not making floating point the default would lead
to a safer, less buggy world. As I've said, most of the people who
really need blindingly fast floats are doing them on custom silicon
already anyway.

If YOU think that current behaviour is really worth the pain that
inexperienced coders manage to cause with it, I'd like to hear why. What
are the great benefits to this format which should make it the default,
outweighing all this risk?
From: ······@earthlink.net
Subject: Re: demonic numbers !
Date: 
Message-ID: <1108053943.604775.282250@f14g2000cwb.googlegroups.com>
Cameron MacKinnon wrote:
> If YOU think that current behaviour is really worth the pain that
> inexperienced coders manage to cause with it, I'd like to hear why.
> What are the great benefits to this format which should make it the
> default, outweighing all this risk?

It's not "the default".  It's the agreed on way floating point numbers
are expressed in the vast majority of computer languages.  If you don't
intend to be using floating point numbers, you shouldn't use it.

The fact that people misuse floating point doesn't change any of this.

BTW - the proposed 0.11 syntax for 1/9 has nasty surprises.  Suppose
that I mean 11/100 and not 1/9.  Do I type 0.110?  If so, why isn't
that shorthand for 0.110110110 or 0.1101010 ... Or, was the proposal
that the . syntax was only for for decimal rationals?  That's even
worse - why should 11/100 be expressed so unlike 1/9?

> The problem is that Lisp number interpretation is not compatible
> with human one.

Except that it is; it's completely compatible with the way that almost
every other computer language expresses floats, even if that expression
is not consistent with how numbers are expressed in some other
contexts.

Yes, many lisps overload the integer expression to include bignums.
The careful reader will note that they do so for numbers whose
expression is not defined in other languages.

It's usually a good idea to pick the right hill to die on.  Arguing for
a different float expression is like arguing that telephone and adding
machine keypads should be the same.
From: William D Clinger
Subject: Re: demonic numbers !
Date: 
Message-ID: <1108055897.952660.179980@c13g2000cwb.googlegroups.com>
Cameron MacKinnon wrote:
> If YOU think that current behaviour is really worth the pain that
> inexperienced coders manage to cause with it, I'd like to hear why.
> What are the great benefits to this format which should make it the
> default, outweighing all this risk?

There are two ways to go wrong.

If floating point is used for calculations that should have
been performed using exact arithmetic, then the answer will
probably be a little off from the correct answer.  If the
computation is complex or ill-conditioned, the answer might
be way off.

If exact representations are used for calculations that
should have been performed using floating point arithmetic,
then the answer, when obtained, will probably be just as
good as the desired answer.  If the computation is complex,
however, then the answer is likely to be far more precise,
which can be disconcerting; most users don't know how to
interpret a number whose printed representation fills more
than one page.  Furthermore the intermediate results may
consume so much space that the heap memory is exhausted,
or paging may slow the calculation by a factor of so many
thousands that the calculation becomes indistinguishable
from an infinite loop.  Under the best of circumstances,
the calculation will be only one or two decimal orders of
magnitude slower.
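
As a sketch of that blow-up (mine, not from the text above):
Newton's iteration for the square root of 2, run on exact
rationals, roughly doubles the number of digits in the
denominator at every step:

  (let ((x 1))
    (dotimes (i 6)
      (setf x (/ (+ x (/ 2 x)) 2))   ; x := (x + 2/x)/2, kept exact
      (format t "~D: ~D digits in the denominator~%"
              (1+ i) (length (princ-to-string (denominator x))))))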

There is no substitute for programmers who have a clue.

Will
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <874qgkgqyf.fsf@nyct.net>
"William D Clinger" <··········@verizon.net> writes:

> Under the best of circumstances, the calculation will be only one or
> two decimal orders of magnitude slower.

Which could mean that your rocket keeps going in some useless direction
instead of turning to intercept the bomb that is headed your way that it
was supposed to destroy.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <878y5wjsc2.fsf@nyct.net>
Cameron MacKinnon <··········@clearspot.net> writes:

> 4) Expert computer programmers working in the most validated
> environments screw it up sometimes (see "Patriot Missile Failure").

If they didn't use FP, the missile would have gone miles off target as
it waited 100 times too long for the computation of where to adjust its
heading.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Michael Kappert
Subject: Re: demonic numbers !
Date: 
Message-ID: <3722afF55k97pU1@individual.net>
Cameron MacKinnon wrote:
> Michael Kappert wrote:
>> What I'd like to know is why you want, if you do, to conflate the
>> interpretation of two distinct representations into one, when it's
>> /not/ really hard to remember how the computer understands them, and
>> they naturally match the respective data types.

> 2) The majority of computer programmers aren't competent users of
> floating point anyway, 
I just googled for "What Every Computer Scientist Should Know About 
Floating-point Arithmetic," by David Goldberg.
The first link that came up was
"Fortran Programmer's Guide: 6 - Floating-Poin#t Arithmetic"
which I think backs my assumption that programmers can hardly
escape learning about FP pitfalls.
Even "Just Java" by Peter van der Linden refers to Goldberg.

> and they just pass its quirky behaviour straight
> through to the users,
I've never seen this, neither in niche AI applications nor
in mainstream business apps. When a problem becomes obvious,
they fix (read: hack) it, or redesign.

> above, who now think that our computers can't add.
No, in my experience they tell the programmers they can't program...

> 3) Even the majority of graduates of top flight CS schools aren't
> trained in numerical analysis
See 2).

> 4) Expert computer programmers working in the most validated
> environments screw it up sometimes (see "Patriot Missile Failure").
I read 28 people were killed in the incident, so it's hard to
reply without being cynical.

> 5) When someone means for a computer to compute something exactly and
> the computer only approximates it, the results can be disastrous. There
> must be far fewer cases where the computer's getting the answer "too
> right" would lead to failure.
No, I think you're oversimplifying.
A computer doesn't do anything for you unless you tell it.
And if you want to tell it, it'll make you think about the
problem as well as the coding.

> On balance, I believe not making floating point the default would lead
> to a safer, less buggy world. As I've said, most of the people who
> really need blindingly fast floats are doing them on custom silicon
> already anyway.
> 
> If YOU think that current behaviour is really worth the pain that
> inexperienced coders manage to cause with it,
Inexperienced coders aren't exactly the right choice for
programming a missile control unit, are they?

> I'd like to hear why. What
> are the great benefits to this format which should make it the
> default, outweighing all this risk?
I think it is very obvious that you just can't approach any
programming task without thinking about the specific
requirements - space, performance, robustness, and many more.
In view of this, the burden of getting Lisp to read 3.14579
as a rational is negligible.

Michael
From: Thomas A. Russ
Subject: Re: demonic numbers !
Date: 
Message-ID: <ymi4qgo2qwc.fsf@sevak.isi.edu>
"Christophe Turle" <······@nospam.com> writes:

> "Barry Margolin" <······@alum.mit.edu> a �crit dans le message de news: 
> 
> > Sure, we could have made .1111 read as a rational, and require you to
> > write something like .1111e0 to get a float, but why should we have?  We
> > provided another syntax for rationals that didn't conflict with existing
> > practice -- if you want a rational, type 1111/10000.
> 
> Because these practices are bad. And i don't want to write 1111/10000 
> instead of .1111, it is not human conventional. It is just implementation 
> details.

These practices are not "bad".  They just happen to be inconvenient for
the particular problem you are trying to solve.  But instead you are
advocating a system where the inconvenience would impact most other users
of the input system who want a convenient way of getting floating point
numbers.

There are very good reasons, both of engineering and of compatibility
with programming-language conventions, to have floating point numbers
entered the way they are.

Now, given the perennial confusion that floating point numbers seem to
generate, particularly with some new programmers, I actually think that
programming languages should have a reasonably convenient way to enter
decimal (rather than binary) floating point numbers and support
arithmetic on them.  Java has something in this direction (at least with
the support classes) but it isn't integrated as well as the other
numeric types and the input formalism is not nearly as convenient.

It's actually too bad that the letter "d" is already taken for double
float, since it would have been a nice suffix to use for decimal
floats.  Actually, an extension to lisp to implement decimal floats
might be a nice direction for language evolution.

And it would also allow Christophe to solve his problem by binding
*READ-DEFAULT-FLOAT-FORMAT* to this new decimal float type.
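
Something like this, say - where DECIMAL-FLOAT is purely hypothetical,
standing in for an implementation-provided base-10 float type:

  ;; DECIMAL-FLOAT is not a standard CL type; this sketch only works
  ;; on an implementation that chose to provide such a format.
  (let ((*read-default-float-format* 'decimal-float))
    (read-from-string "(x 123.2569 y 1256.3588)"))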

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Christopher C. Stacy
Subject: Re: demonic numbers !
Date: 
Message-ID: <ulla03wwh.fsf@news.dtpq.com>
···@sevak.isi.edu (Thomas A. Russ) writes:
> It's actually too bad that the letter "d" is already taken for double
> float, since it would have been a nice suffix to use for decimal
> floats.

How about "$".
Just throwing in my $.02
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87zmycgt5l.fsf@nyct.net>
···@sevak.isi.edu (Thomas A. Russ) writes:

> Now, given the perennial confusion that floating point numbers seem to
> generate, particularly with some new programmers, I actually think that
> programming language should have a reasonably convenient way to enter
> decimal (rather than binary) floating point numbers and support
> arithmetic on them.

Interestingly enough, Lisp already has this. It's just
implementation-dependent which, if any, FP format has a FLOAT-RADIX of
10.
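
For instance (both operators are standard; the values are
implementation-dependent):

  (float-radix 1.0)    ; => 2 on IEEE-style implementations; a 10 here
                       ;    would mean a decimal default float format
  (float-digits 1.0)   ; => 24 for an IEEE single-float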

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Cameron MacKinnon
Subject: Re: demonic numbers !
Date: 
Message-ID: <gPCdnaGXWOQ7JZrfRVn-sw@golden.net>
Barry Margolin wrote:
> In article <·······················@news.free.fr>, "Christophe Turle"
> <······@nospam.com> wrote:
> 
> 
>> Yes, i know it's not lisp code. And this is in fact my problem, why
>> IN lisp, .1111 is read as a float ? yes, the spec is the spec. But
>> for this point, imho the spec is not good.
> 
> 
> Why shouldn't it read it as a float?  That's the way many dialects of
>  Lisp have been reading and writing floats for decades, and Common
> Lisp was designed to be compatible with preceding dialects like
> Maclisp where feasible.

There's a lot to be said for backward compatibility, but do you expect
numeric formats to be frozen in time forever? We've moved from ASCII to
Unicode, from 16 bit to 64 bit integers, from plain text to complicated
markup languages (sigh) and from kilobytes to gigabytes of storage.

> It's also consistent with most other programming languages -- numbers
>  with embedded decimal points are floating point.

True. So, just like other language fora, c.l.l has a trickling stream of
newbies who need to be told about floating point. And each one of those
earnest bug reporters probably represents ten programmers who have
noticed that their floats add a bit strangely but didn't investigate,
and a hundred programmers who didn't even notice.

You can claim that computers currently don't add like humans do and that
it is all the fault of poor education when programmers don't realize
this and write buggy software that blows up, sometimes literally. But I
think that if you're going to appropriate the symbols of the broader
culture (working in base 10, using conventional + - = notation), you've
a duty to make those symbols represent in the computer what they
represent in the minds of everyday people. Changing the definitions
slightly to save a few bits was an amazing hack back when RAM was
measured in bits and all of the users were pipe stress freaks and
crystallography weenies. Since then, the floating point impedance
mismatch between people and computers has been responsible for a lot of
pain.

Users of floating point enjoyed a ten year free ride from Intel et al,
constantly speeding up IEEE floats to benefit the 3D visualization
(gamer) crowd. Those people have now left the building, and are using
their own specialized 3D graphics silicon. Incidentally, IEEE is
updating 754 and merging in IEEE-854.

Much of my enthusiasm for Lisp is because it eliminates entire classes
of bugs which are endemic in virtually all software written in more
popular languages. As well, Lisp's wonderful malleability means that
those who need different semantics (such as our OP) can easily adapt it
to their needs.

But in the area of floating point, I would argue that Lisp is just as
broken as most other languages. For the average computer programmer,
floating point is an accident waiting to happen, not the ideal default.
Nor can it be said that Common Lisp is the best at supporting this
format, which admittedly has its uses. Where's directed rounding mode
support, or control over exception handling?
From: Kent M Pitman
Subject: Re: demonic numbers !
Date: 
Message-ID: <ur7jsb2zo.fsf@nhplace.com>
Cameron MacKinnon <··········@clearspot.net> writes:

> Much of my enthusiasm for Lisp is because it eliminates entire classes
> of bugs which are endemic in virtually all software written in more
> popular languages. As well, Lisp's wonderful malleability means that
> those who need different semantics (such as our OP) can easily adapt it
> to their needs.
> 
> But in the area of floating point, I would argue that Lisp is just as
> broken as most other languages. For the average computer programmer,
> floating point is an accident waiting to happen, not the ideal default.
> Nor can it be said that Common Lisp is the best at supporting this
> format, which admittedly has its uses. Where's directed rounding mode
> support, or control over exception handling?

If you examine George Bush's recent inaugural address, you'll find
lots of similar desire to end all instances of certain classes of
problems for all time.

But it's not likely that the desire to do so will make it happen, and
there is credible reason to believe that his willingness to try will
bankrupt the United States and will leave us unable to pursue even
more modest goals in the future.

Lofty goals are great.  Understanding one's own limitations is good, too.

There is a reason that people who program have to learn how to talk to
computers and don't get to just do it as a natural consequence of their 
existing knowledge of English.

For example, arguments similar to those you and others have made here
are equally applicable to why LOOP should just implement English
instead of beating around the bush and using the pseudo-language fine
line distinctions that English speakers do not use.  English does not
make the "for/with" distinction that LOOP does, for example.  In
English, for and with are often synonyms in the way LOOP uses them,
and certainly do not have a scoping significance such as LOOP gives
them.

Formal languages are just that.  Formal.  They mean what they are
defined to mean and not otherwise.  Informal languages, like English,
are malleable, but make very bad programming languages as a consequence.

If you're willing to leave aside "mere" concerns of practicality and
instead to take it as a given not worthy of discussion that you have
access to unlimited cash resources (yes, the changes you propose can
be expressed in cash terms, running probably into the many millions of
dollars), that compatibility with existing practice doesn't matter,
that your point of view is canonically right and will conflict with no
other way of life anywhere (i.e., that people will throw roses as you
parade the streets in victory), and that everyone thinks this is the
one and only one central problem perturbing the world today, I'm not
sure why you're wasting your time here on petty language politics.
There's probably a job waiting for you in the US White House...
From: Cameron MacKinnon
Subject: Re: demonic numbers !
Date: 
Message-ID: <j9Gdne_d1J_IZ5rfRVn-3g@golden.net>
Kent M Pitman wrote:
> There is a reason that people who program have to learn how to talk 
> to computers and don't get to just do it as a natural consequence of 
> their existing knowledge of English.

Sure. The first thing they learn is that computers do EXACTLY what
they're programmed to, and the second is that all instructions must be
explicit, the computer being incapable of making value judgments. The
decimal input of binary floating point numerics immediately violates
both of these axioms as the computer rounds off the number to something
it (or Intel or the IEEE, but it's all the same to the student) finds
"close enough" and more convenient internally.

Programming literature doesn't typically begin by telling students that
floating point numbers are approximations; they're just introduced
spontaneously as datatypes, with the pleasant fiction that + adds them.
SICP deals with the problem in a footnote.

> For example, arguments similar to those you and others have made here
>  are equally applicable to why LOOP should just implement English 
> instead of beating around the bush and using the pseudo-language fine
>  line distinctions that English speakers do not use.  English does 
> not make the "for/with" distinction that LOOP does, for example.  In
>  English, for and with are often synonyms in the way LOOP uses them,
>  and certainly do not have a scoping significance such as LOOP gives
>  them.

I'd prefer not to get numeric ergonomics conflated with LOOP. I feel
that computer numerics are inherently an imitation of extra-computer
numerics, so their fidelity to the numerics that people use is an
appropriate metric. Iteration is much more of an ad hoc, computer
science specific problem. But I will note that iteration, like floating
point, is an area where many software bugs are hatched, across many
languages (though there's variance).

> Formal languages are just that.  Formal.  They mean what they are 
> defined to mean and not otherwise.  Informal languages, like English,
>  are maleable, but make very bad programming languages as a 
> consequence.

Floating point is probably the most abused formalism in CS. We should, I
suppose, all be writing programs that query the precision and base of
our FP implementations, calculating error bounds at each operation.
Better still, our compilers ought to be solving which of our datatypes
could be single precision without materially affecting the answers,
halving memory and bandwidth requirements. Entre nous, nobody codes FP
like this. The best answer you're likely to get from John Q. Programmer
is that he thinks {Intel,AMD,PowerPC,Sun} makes good chips, and he heard
that if you use double precision for everything you should be fine. To a
typical coder, all of the formalism was done by the IEEE, the chip
makers and the compiler writer, who chose a magical number of bits and
(he imagines) warranted that the answers would always be close enough.

Current practice would be easier to defend if 95% of the existing
codebase was a model of formally correct, well written, mostly bug free
FP and 95% of today's working programmers understood the semantics of
floating point, but 95% of them don't.

If neither the semantics of computer numerics nor the level of education
about them changes, the status quo is a small number of coders who can
proficiently use FP and a large number who write FP code which works
only by accident. Further, even the ones who know better are often
tempted to write accidental code, because the odds are so good.


Please don't compare my style with that of your fearless leader. It's
the new Godwinism.
From: jayessay
Subject: Re: demonic numbers !
Date: 
Message-ID: <m38y5yki0o.fsf@rigel.goldenthreadtech.com>
Cameron MacKinnon <··········@clearspot.net> writes:

> Kent M Pitman wrote:
> > There is a reason that people who program have to learn how to talk
> > to computers and don't get to just do it as a natural consequence of
> > their existing knowledge of English.
> 
> Sure. The first thing they learn is that computers do EXACTLY what
> they're programmed to, and the second is that all instructions must be
> explicit, the computer being incapable of making value judgments. The
> decimal input of binary floating point numerics immediately violates
> both of these axioms as the computer rounds off the number to something
> it (or Intel or the IEEE, but it's all the same to the student) finds
> "close enough" and more convenient internally.

As long as the FPUs or whatever are performing as specified, your
claim of "axiom violation" here is obviously just plain wrong.


> Programming literature doesn't typically begin by telling students
> that floating point numbers are approximations; they're just
> introduced spontaneously as datatypes, with the pleasant fiction
> that + adds them.  SICP deals with the problem in a footnote.

Now you've changed to talking about an education problem.


> Current practice would be easier to defend if 95% of the existing
> codebase was a model of formally correct, well written, mostly bug free
> FP and 95% of today's working programmers understood the semantics of
> floating point, but 95% of them don't.

Let's be realistic - the vast majority of programs don't care about FP
at all, as they either don't use it or use it only in a couple of
places, for only the simplest (and most harmless) calculations.


> Please don't compare my style with that of your fearless leader. It's
> the new Godwinism.

;-)


/Jon

-- 
'j' - a n t h o n y at romeo/charley/november com
From: Christopher C. Stacy
Subject: Re: demonic numbers !
Date: 
Message-ID: <uhdko3wm4.fsf@news.dtpq.com>
Kent M Pitman <······@nhplace.com> writes:
> For example, arguments similar to those you and others have made here
> are equally applicable to why LOOP should just implement English

Date: Thursday, 18 November 1982  17:49-EST
From: Alan Bawden <ALAN at SCRC-TENEX>
To:   INFO-COBOL, GMP
Re:   [acw: LET'S -- a new looping macro for lisp]

	Date: Wednesday, 17 November 1982  15:46-EST
	From: Dick
	Sender: DICK at MIT-OZ
	To:   *its at MIT-OZ, info-lispm at MIT-OZ, info-lisp at MIT-OZ
	Re:   LetS -- a new looping macro for lisp
	
	LetS is a new Lisp looping macro which makes it possible to
	write a wide class of loops as simple expressions.  For example,
	in order to sum up the positive elements of the one
	dimensional array A one need only write:
	
	(Rsum (Fgreater (Evector A)))
	
	LetS is described fully in AIM-680 which has just appeared.
	I will give a talk describing LetS in a few weeks.
	
	LetS is compatible with both MacLisp and LispMachine Lisp.
	It exists on the directories LIBSLP and LMLIB.
	
	Try it you will like it.
	
	Dick Waters
	
	Send all comments etc. to ····@OZ.
	


Date: Wednesday, 17 November 1982, 17:29-EST
From: Allan C. Wechsler <acw at scrc-vixen>
To:   BSG, fun
Re:   LET'S -- a new looping macro for lisp

LET'S is a new Lisp looping macro which makes it possible to
write a wide class of loops as simple expressions.  For example,
in order to sum up the positive elements of the one
dimensional array A one need only write:

(LET'S SUM UP THE POSITIVE ELEMENTS OF THE ONE-DIMENSIONAL ARRAY A)

and all of LMFS can be compressed to

(LET'S HAVE A SUPER WINNING FILE SYSTEM THAT DOES THE RIGHT THING
       ALL THE TIME)

For those who like to debug, you can leave off the ALL THE TIME modifier.

   --- Allan
From: Thomas A. Russ
Subject: Re: demonic numbers !
Date: 
Message-ID: <ymivf922a5w.fsf@sevak.isi.edu>
Chuckle.

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87r7jogsr1.fsf@nyct.net>
······@news.dtpq.com (Christopher C. Stacy) writes:

> 	LetS is a new Lisp looping macro which makes it possible to
> 	write a wide class of loops as simple expressions.  For example,
> 	in order to sum up the positive elements of the one
> 	dimensional array A one need only write:
> 	
> 	(Rsum (Fgreater (Evector A)))

Is this where SERIES came from? Looks like E is how you name scanners, F
is how you name transducers (or maybe just "conditional and other
complex transducers"), and R is how you name collectors.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Christopher C. Stacy
Subject: Re: demonic numbers !
Date: 
Message-ID: <uzmycxah3.fsf@news.dtpq.com>
Rahul Jain <·····@nyct.net> writes:

> ······@news.dtpq.com (Christopher C. Stacy) writes:
> 
> > 	LetS is a new Lisp looping macro which makes it possible to
> > 	write a wide class of loops as simple expressions.  For example,
> > 	in order to sum up the positive elements of the one
> > 	dimensional array A one need only write:
> > 	
> > 	(Rsum (Fgreater (Evector A)))
> 
> Is this where SERIES came from? Looks like E is how you name scanners, F
> is how you name transducers (or maybe just "conditional and other
> complex transducers"), and R is how you name collectors.

Yup.
From: John Thingstad
Subject: Re: demonic numbers !
Date: 
Message-ID: <opslue7eiypqzri1@mjolner.upc.no>
On Mon, 07 Feb 2005 13:51:17 -0500, Cameron MacKinnon  
<··········@clearspot.net> wrote:

>
> But in the area of floating point, I would argue that Lisp is just as
> broken as most other languages. For the average computer programmer,
> floating point is an accident waiting to happen, not the ideal default.
> Nor can it be said that Common Lisp is the best at supporting this
> format, which admittedly has its uses. Where's directed rounding mode
> support, or control over exception handling?

I am not sure there is a grail that eliminates all errors.
There is no general way to eliminate roundoff error.
I have taken a sideways glance at Mathematica.
Here the epsilon given to routines is implicit unless specified.
This can, however, cause other more subtle bugs.
I have previously argued that a basic course in numerical analysis
should be a part of a computer science curriculum.
I think education rather than sweeping the problem under the
carpet is the way to go here.

-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
From: Barry Margolin
Subject: Re: demonic numbers !
Date: 
Message-ID: <barmar-2101DB.21375707022005@comcast.dca.giganews.com>
In article <······················@golden.net>,
 Cameron MacKinnon <··········@clearspot.net> wrote:

> Barry Margolin wrote:
> > In article <·······················@news.free.fr>, "Christophe Turle"
> > <······@nospam.com> wrote:
> > 
> > 
> >> Yes, i know it's not lisp code. And this is in fact my problem, why
> >> IN lisp, .1111 is read as a float ? yes, the spec is the spec. But
> >> for this point, imho the spec is not good.
> > 
> > 
> > Why shouldn't it read it as a float?  That's the way many dialects of
> >  Lisp have been reading and writing floats for decades, and Common
> > Lisp was designed to be compatible with preceding dialects like
> > Maclisp where feasible.
> 
> There's a lot to be said for backward compatibility, but do you expect
> numeric formats to be frozen in time forever? We've moved from ASCII to
> Unicode, from 16 bit to 64 bit integers, from plain text to complicated
> markup languages (sigh) and from kilobytes to gigabytes of storage.

And for the most part, backward compatibility is maintained as we 
progress.  I don't know much about Unicode, but I'll bet that the 
characters it shares with ASCII use the original ASCII encoding.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
From: Bruce Stephens
Subject: Re: demonic numbers !
Date: 
Message-ID: <87fz08c8j1.fsf@cenderis.demon.co.uk>
Cameron MacKinnon <··········@clearspot.net> writes:

[...]

> Much of my enthusiasm for Lisp is because it eliminates entire
> classes of bugs which are endemic in virtually all software written
> in more popular languages. As well, Lisp's wonderful malleability
> means that those who need different semantics (such as our OP) can
> easily adapt it to their needs.

Would a default of arbitrary precision rational arithmetic really be
better?

Many algorithms require the approximate arithmetic provided by
floating point: with exact arithmetic many iterative methods would
converge but never terminate (or more likely would become slower and
slower before terminating).

And presumably finite precision rationals wouldn't be better.

I suspect a default of rational arithmetic would simply confuse people
differently, not less.

> But in the area of floating point, I would argue that Lisp is just
> as broken as most other languages. For the average computer
> programmer, floating point is an accident waiting to happen, not the
> ideal default.  Nor can it be said that Common Lisp is the best at
> supporting this format, which admittedly has its uses. Where's
> directed rounding mode support, or control over exception handling?

Yes, that's true.  I haven't looked, but I'd hope some implementations
offer extensions providing lexically scoped directed rounding and
things.  There's a potential for offering nicer support than flag
twiddling library functions (which I think C99 and some other
languages provide).
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87vf90gt1o.fsf@nyct.net>
Cameron MacKinnon <··········@clearspot.net> writes:

> There's a lot to be said for backward compatibility, but do you expect
> numeric formats to be frozen in time forever? We've moved from ASCII to
> Unicode, from 16 bit to 64 bit integers, from plain text to complicated
> markup languages (sigh) and from kilobytes to gigabytes of storage.

And we've moved from 32-bit to 128-bit floats. No, we expect Moore's
law's indirect effects to continue as long as Moore's law does, too. 
Note that exact rational math is not a question of size; it's a question
of type.
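
For instance, in a typical implementation:

  (type-of 1111/10000)  ; => RATIO -- the exactness lives in the type
  (type-of 0.1111)      ; => SINGLE-FLOAT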

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Pascal Bourguignon
Subject: Re: demonic numbers !
Date: 
Message-ID: <87mzuhc1s7.fsf@thalassa.informatimago.com>
"Christophe Turle" <······@nospam.com> writes:

> "Rahul Jain" <·····@nyct.net> a �crit dans le message de news: 
> ··············@nyct.net...
> > "Christophe Turle" <······@nospam.com> writes:
> >
> >> No, what i really want is a ->rational at the read time and without 
> >> reader
> >> macros since i don't control inputs :(
> >
> > You _must_ _not_ _use_ _the_ _lisp_ _reader_, _then_, _because_ _you_
> > _are_ _not_ _reading_ _lisp_.
> >
> > To make it clear: The lisp reader reads lisp code. What you want to read
> > is not lisp code. Therefore, you do not want to use the lisp reader.
> 
> 
> Yes, i know it's not lisp code. And this is in fact my problem, why IN lisp, 
> .1111 is read as a float ? yes, the spec is the spec. But for this point, 
> imho the spec is not good.

Because it's read this way by all the other languages; therefore it's
better to keep reading it this way, to be able to interchange
floating-point data.
 
> I can't satisfy myself thinking that in Lisp .1111 does not mean .1111 (from 
> human readers point of view)

There's a notation in lisp if you really want rationals: 1111/10000


So we get the best of both: compatible with all the data you can find
everywhere, and still possible to read rationals.


> It's the same as telling you that in language Horrible1 HGF.FDS means .1111 
> because internally the language uses compression algorithms so it is more 
> efficient. Where is abstraction ?

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
I need a new toy.
Tail of black dog keeps good time.
Pounce! Good dog! Good dog!
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <42063f2a$0$946$626a14ce@news.free.fr>
"Pascal Bourguignon" <····@mouse-potato.com> a �crit dans le message de 
news: ··············@thalassa.informatimago.com...
> "Christophe Turle" <······@nospam.com> writes:
>
>> "Rahul Jain" <·····@nyct.net> a �crit dans le message de news:
>> ··············@nyct.net...
>> > "Christophe Turle" <······@nospam.com> writes:
>> >
>> >> No, what i really want is a ->rational at the read time and without
>> >> reader
>> >> macros since i don't control inputs :(
>> >
>> > You _must_ _not_ _use_ _the_ _lisp_ _reader_, _then_, _because_ _you_
>> > _are_ _not_ _reading_ _lisp_.
>> >
>> > To make it clear: The lisp reader reads lisp code. What you want to 
>> > read
>> > is not lisp code. Therefore, you do not want to use the lisp reader.
>>
>>
>> Yes, i know it's not lisp code. And this is in fact my problem, why IN 
>> lisp,
>> .1111 is read as a float ? yes, the spec is the spec. But for this point,
>> imho the spec is not good.
>
> Because it's read this way by all the other languages; therefore it's
> better to keep reading it this way, to be able to interchange
> floating-point data.

as said in another post :

(let ((*read-rational-as* :float))
  (load "my-data") )

Just because other languages have a bad design doesn't mean lisp must
do the same.

>> I can't satisfy myself thinking that in Lisp .1111 does not mean .1111 
>> (from
>> human readers point of view)
>
> There's a notation in lisp if you really want rationals: 1111/10000

Humans write/read 0.1111, not 1111/10000. Of course you may add a
layer EACH time to convert "0.1111" to "1111/10000", but what a
burden ! don't even think about using the REPL :(

You express numbers as rationals and MAY implement them with floats.

> So we get the best of both: compatible with all the data you can find
> everywhere, and still possible to read rationals.

CL-USER> (read-from-string "0.11111111101111111")
0.11111111

Are you sure this is the best ? sure that this never leads to errors in
apps ? sure that if shown to humans, the majority will say "that's fine" ?

Floats should be kept for specialists/compilers ... and surely not as a 
default language parsing behavior.


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87wttgjtku.fsf@nyct.net>
"Christophe Turle" <······@nospam.com> writes:

> Humans write/read 0.1111 not 1111/10000.

Wrong. Blatantly wrong. Otherwise that syntax for rationals would have
never been chosen.

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <420d41ba$0$531$636a15ce@news.free.fr>
"Rahul Jain" <·····@nyct.net> a �crit dans le message de news: 
··············@nyct.net...
> "Christophe Turle" <······@nospam.com> writes:
>
>> Humans write/read 0.1111 not 1111/10000.
>
> Wrong. Blatantly wrong. Otherwise that syntax for rationals would have
> never been chosen.
>


for numbers like 1/3

-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87sm44jtin.fsf@nyct.net>
"Christophe Turle" <······@nospam.com> writes:

> Floats should be kept for specialists/compilers ... and surely not as a 
> default language parsing behavior.

Are you saying that non-specialists and non-compilers read code on a
regular basis?

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: Christophe Turle
Subject: Re: demonic numbers !
Date: 
Message-ID: <420d436b$0$541$636a15ce@news.free.fr>
"Rahul Jain" <·····@nyct.net> a �crit dans le message de news: 
··············@nyct.net...
> "Christophe Turle" <······@nospam.com> writes:
>
>> Floats should be kept for specialists/compilers ... and surely not as a
>> default language parsing behavior.
>
> Are you saying that non-specialists and non-compilers read code on a
> regular basis?
>


Yes. I'm a non-FP specialist and non-compiler and i'm reading code every
day. I don't want to bother with these low-level aspects.

But these aspects are very important for optimization purposes, so we
have to keep them when needed - but as hidden as possible.

I just want to use the right safety/optimization declaration, for example.

And if you want compatibility, just have a way to say it simply.


-- 
___________________________________________________________
Christophe Turle.
sava preview http://perso.wanadoo.fr/turle/lisp/sava.html
(format nil ···@~a.~a" 'c.turle 'wanadoo 'fr) 
From: Rahul Jain
Subject: Re: demonic numbers !
Date: 
Message-ID: <87vf8mbnc8.fsf@nyct.net>
"Christophe Turle" <······@nospam.com> writes:

> Yes. I'm a non FP specialist and non-compiler and i'm reading code every 
> days. I don't want to bother with these low-level aspects.

Non-programming specialists don't mess around with code effectively.
Non-FP specialists don't mess around with FP effectively.

If you don't want something, don't use it. If you want a syntax for
expressing rationals, the only reliable choice is
<numerator>/<denominator> unless you want to teach people about other
bases and how to express fractions in those bases. E.g. 0.11 in base 3
being 4/9 in base 10.
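
For ratios, note that the radix syntax already covers other bases:

  #3r11/100   ; => 4/9 -- the base-3 fraction 0.11, written as a ratio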

-- 
Rahul Jain
·····@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
From: ···············@yahoo.com
Subject: Re: demonic numbers !
Date: 
Message-ID: <1107792886.953978.26540@o13g2000cwo.googlegroups.com>
I stand corrected.  102005096/1000000 is wrong, because you want
1020051/10000

The issue has nothing to do with Lisp.  You'd encounter the same issue
in C, Basic, or any language that uses floating-point arithmetic.  The
definition of floating-point is to use the number in the computer's
binary representation that is closest to the real number whose name
you've typed.  When you type 102.0051, the computer willfully
misunderstands you to mean some rational number whose denominator is a
power of 2.

If you want a power-of-10 denominator rather than a power of 2, then
you're doing "input formatting", not floating-point arithmetic.
Nothing wrong with "input formatting"; we just have to define our terms.

I would write an input formatter, using string functions like "number
of characters after the decimal point".  Whether to do is as a
function, macro, or reader macro is something I care about less.
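
A minimal sketch of such a formatter (assuming a non-negative number
with at most one decimal point and at least one digit after it):

  (defun parse-decimal-rational (string)
    ;; "102.0051" => 1020051/10000 ; ".1111" => 1111/10000
    (let ((dot (position #\. string)))
      (if (null dot)
          (parse-integer string)
          (let ((int  (if (zerop dot) 0 (parse-integer string :end dot)))
                (frac (subseq string (1+ dot))))
            (+ int (/ (parse-integer frac)
                      (expt 10 (length frac))))))))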
From: Alan Crowe
Subject: Re: demonic numbers !
Date: 
Message-ID: <86r7jseasy.fsf@cawtech.freeserve.co.uk>
Christophe Turle answered mmcconnell:
>> In your application, do you know for sure that the decimals are exact?
>> That is, if your file has 1.3333, are you sure it's 13333/10000 and not
>> 4/3 ?
>
>yes i am.

William Bland has already given the best answer with his
read macros. 

Nevertheless, this hack might not be too crude.
If you know that the decimals in your input are restricted
to a modest number of digits, then RATIONAL and
RATIONALIZE both come close to solving the problem.

RATIONAL fails because the denominator is always a power of
2.

RATIONALIZE fails because the denominator is unconstrained.
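
A quick illustration (single-float values here; RATIONALIZE's exact
result is implementation-dependent):

  (rational 0.1)     ; => 13421773/134217728, denominator a power of 2
  (rationalize 0.1)  ; => 1/10 -- the simplest rational within float
                     ;    accuracy, whose denominator need not be a
                     ;    power of ten in general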

Writing one's own version that constrains the denominator to
be a power of ten will serve for numbers of modest
precision.

(defun close-to-integer (number tolerance-factor)
  ;; Return the nearest integer when NUMBER is within
  ;; TOLERANCE-FACTOR * SINGLE-FLOAT-EPSILON of it, else NIL.
  (multiple-value-bind (integer discrepancy)
      (round number)
    (if (< (abs discrepancy)
           (* tolerance-factor single-float-epsilon))
        integer
        nil)))

(defun rat-ten (float)
  ;; Scale FLOAT by successive powers of ten until it is close
  ;; enough to an integer, then divide by the same power of ten.
  (do ((number float (* 10 number))
       (tolerance 2 (* 10 tolerance))
       (denominator 1 (* 10 denominator)))
      ((close-to-integer number tolerance)
       (/ (round number) denominator))))

CL-USER(77): (- 1.001 1) => 0.0010000467
CL-USER(78): (- (rat-ten 1.001) 1) => 1/1000

This will serve for perhaps 14 digits if you set
*read-default-float-format* to double-float and use
double-float-epsilon:

(- 1.00000000000001 1) => 9.992007221626409e-15
(- (rat-ten 1.00000000000001) 1) => 1/100000000000000

Alan Crowe
Edinburgh
Scotland