In article <·····················@news.demon.co.uk>, Adrian Hey
<····@NoSpicedHam.iee.org> wrote:
> Erann Gat wrote:
>
> > After all, it is not unreasonable to expect that all
> > the work required just to get the program to compile so you could run it
> > for the first time should pay a dividend of some sort,
>
> I'm still baffled by what this extra work that you (and a few others like
> Raffael and Pascal) keep referring to actually is. So maybe you could
> illustrate what extra work you're talking about if we chose a specific
> example we are both familiar with, the phonecodes problem. I posted
> a Haskell solution which you can dig out of the google if you like.
> How do you think the presence of a static type system impeded development
> of this program?
I can't really speak to this because I don't know Haskell. But the
episode in question involved C++. And in the case of ML the extra work
involves things like changing all your arithmetic operations if you decide
to change an int to a float.
> The only extra work I can think of is that a static type system does
> force you to give some thought to correctness as you write your
> code.
No, it only forces you to give some thought to a particular kind of
correctness, which may or may not be a kind that actually matters to you.
That's the whole point.
E.
Erann Gat wrote:
>
> I can't really speak to this because I don't know Haskell. But the
> episode in question involved C++. And in the case of ML the extra work
> involves things like changing all your arithmetic operations if you decide
> to change an int to a float.
Not in Standard ML.
- Andreas
--
Andreas Rossberg, ········@ps.uni-sb.de
"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.
<posted & mailed>
Andreas Rossberg wrote:
> Erann Gat wrote:
>>
>> I can't really speak to this because I don't know Haskell. But the
>> episode in question involved C++. And in the case of ML the extra work
>> involves things like changing all your arithmetic operations if you decide
>> to change an int to a float.
>
> Not in Standard ML.
Not in Haskell either
(not that I think that's necessarily a good thing :-).
Regards
--
Adrian Hey
Erann Gat wrote:
>
> I can't really speak to this because I don't know Haskell. But the
> episode in question involved C++. And in the case of ML the extra
> work involves things like changing all your arithmetic operations if
> you decide to change an int to a float.
That's one of the least common types of change that I have encountered
in my life (with the exception of school examples, where it can indeed happen).
The type of changes that I usually see are a change in the fields of a
data structure, or replacing a data structure with a slightly different
one. Static typing will not require me to rewrite all the code where the
semantics has remained unchanged. (Static typing without inference will
require me to needlessly rewrite the routine signatures such as
"f (foo: BAR)" if the type of foo changes to BAZ, and f just passes the
value through to other functions without actually inspecting it. Which
is where I see the value of type inference: I've been spending too many
hours twiddling function signatures.)
Regards,
Jo
Hi Erann Gat,
> I can't really speak to this because I don't know Haskell. But the
> episode in question involved C++. And in the case of ML the extra work
> involves things like changing all your arithmetic operations if you
> decide to change an int to a float.
This lack of explicitness is a weakness in Common Lisp and a potential
source of inaccuracy. It arises whenever integer-to-float conversion must
be more precise than single-float arithmetic, and it can be really
hard to spot the inaccuracy within a series of computations.
Specifically, the problem is with automatic single- to higher-precision
float contagion, which the rules make difficult to spot.
(values #1=(loop for i in '(2 3 5 7 11) sum (* pi (sqrt i))) (type-of #1#))
=> 35.640452700666685
double-float
Here I've been lulled into the false belief that calculations have been
performed to double-floating point accuracy. But SQRT returned single
floats. It is PI that led to the double-float contagion.
Here's the correct level of accuracy provided by a double-float:
(values #1=(loop for i in '(2d0 3d0 5d0 7d0 11d0) sum (* pi (sqrt i))) (type-of #1#))
=> 35.64045272006214
double-float
Here's the more realistic level of accuracy that was computed in the
original example:
(values #1=(loop for i in '(2 3 5 7 11) sum (* (coerce pi 'single-float)
(sqrt i))) (type-of #1#))
=> 35.640453f0
single-float
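For what it's worth (an editorial aside, not from the thread), the same effect can be reproduced in Python, whose floats are in practice IEEE 754 doubles; rounding through the standard struct module stands in for CL's single floats:

```python
import math
import struct

def as_single(x):
    """Round a double to the nearest IEEE 754 single float (binary32)."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Analogue of (loop for i in '(2 3 5 7 11) sum (* pi (sqrt i))):
# SQRT of an integer yields a single float, which then meets a double PI.
mixed = sum(math.pi * as_single(math.sqrt(i)) for i in (2, 3, 5, 7, 11))

# The all-double computation that the double-float result seems to promise:
full = sum(math.pi * math.sqrt(i) for i in (2, 3, 5, 7, 11))

print(mixed)              # printed with double-float digits...
print(abs(mixed - full))  # ...yet it differs from the true sum at roughly
                          # single-float accuracy
```

The result carries double-float formatting but only single-float accuracy, which is exactly the trap described above.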
Hidden contagion would not be a problem if floats of different formats
led to contagion to the float of a lower level of accuracy. If a
single-float appeared in a series of double-float calculations the outcome
would be a single-float and the fix would be to locate the source of the
contagion. As it stands a programmer may be none the wiser when half the
precision is thrown away at some point--or potentially worse if the
implementation supports even higher levels of floating point precision:
(loop for i in '(2 3 5 7 11) sum (* pi (sqrt i)))
=> 35.64045270066668789L0 in CLISP in ANSI Common Lisp mode. CLISP warns:
WARNING:
Floating point operation combines numbers of different precision. See ANSI
CL 12.1.4.4 and the CLISP impnotes for details. The result's actual
precision is controlled by *floating-point-contagion-ansi*.
To shut off this warning, set *warn-on-floating-point-contagion* to nil.
Regards,
Adam
In article <······························@consulting.net.nz>, Adam Warner
<······@consulting.net.nz> wrote:
> Hi Erann Gat,
>
> > I can't really speak to this because I don't know Haskell. But the
> > episode in question involved C++. And in the case of ML the extra work
> > involves things like changing all your arithmetic operations if you
> > decide to change an int to a float.
>
> This lack of explicitness is a weakness in Common Lisp and a potential
> source of inaccuracy. It arises whenever integer to float conversion must
> be more precise than single floating point arithmetic and it can be really
> hard to spot the inaccuracy within a series of computations.
>
> Specifically the problem is with automatic single- to higher-float
> contagion which the rules make difficult to spot.
The solution to this is to add an (optional) warning to signal when this
happens, not to require the user to write single-float-add and
double-float-add and integer-add and rational-add and
single-float-multiply and double-float-multiply and integer-multiply and
rational-multiply and single-float-complex-add and
double-float-complex-add etc. etc. etc. etc. instead of + and *.
E.
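Python makes a handy illustration of this point (an aside, not part of the thread): one generic + covers the whole numeric tower, with the result type following the operands, so no per-type operator variants are needed:

```python
from fractions import Fraction

# One generic + handles the whole numeric tower; no integer-add,
# single-float-add, rational-add, or complex-add variants needed:
pairs = [(1, 2), (1.5, 2), (Fraction(1, 3), 2), (1 + 2j, 2)]
for a, b in pairs:
    print(type(a + b).__name__, a + b)
```

Each sum comes back as an int, float, Fraction, or complex as appropriate, without the caller ever naming a typed operator.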
···@jpl.nasa.gov (Erann Gat) writes:
>Adam Warner <······@consulting.net.nz> wrote:
>
>> Hi Erann Gat,
>>
>> > I can't really speak to this because I don't know Haskell. But the
>> > episode in question involved C++. And in the case of ML the extra work
>> > involves things like changing all your arithmetic operations if you
>> > decide to change an int to a float.
>>
>> This lack of explicitness is a weakness in Common Lisp and a potential
>> source of inaccuracy. It arises whenever integer to float conversion must
>> be more precise than single floating point arithmetic and it can be really
>> hard to spot the inaccuracy within a series of computations.
>>
>> Specifically the problem is with automatic single- to higher-float
>> contagion which the rules make difficult to spot.
>
>The solution to this is to add an (optional) warning to signal when this
>happens, not to require the user to write single-float-add and
>double-float-add and integer-add and rational-add and
>single-float-multiply and double-float-multiply and integer-multiply and
>rational-multiply and single-float-complex-add and
>double-float-complex-add etc. etc. etc. etc. instead of + and *.
Certainly we don't want to add all those different variants.
But that doesn't mean that an optional warning is the solution.
If you add an optional warning, then you also need some syntax
to turn the warning off in particular instances, otherwise the
real warnings will get lost in the noise. But languages like
Haskell already have a solution to that: use an explicit conversion.
--
Fergus Henderson <···@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
On Wed, 19 Nov 2003 00:26:07 +1300, Adam Warner
<······@consulting.net.nz> wrote:
>This lack of explicitness is a weakness in Common Lisp and a potential
>source of inaccuracy. It arises whenever integer to float conversion must
>be more precise than single floating point arithmetic and it can be really
>hard to spot the inaccuracy within a series of computations.
The defect in Common Lisp here isn't in lack of explicitness...
>Specifically the problem is with automatic single- to higher-float
>contagion which the rules make difficult to spot.
>
>(values #1=(loop for i in '(2 3 5 7 11) sum (* pi (sqrt i))) (type-of #1#))
>=> 35.640452700666685
> double-float
>
>Here I've been lulled into the false belief that calculations have been
>performed to double-floating point accuracy. But SQRT returned single
>floats.
That's the defect. SQRT et al should do everything in double precision
unless specifically given a single precision argument. Might be too
late to fix the language specification now, but if it was going to be
changed, that's where the change should be made.
--
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
················@eircom.net (Russell Wallace) writes:
> That's the defect. SQRT et al should do everything in double precision
> unless specifically given a single precision argument. Might be too
> late to fix the language specification now, but if it was going to be
> changed, that's where the change should be made.
AOL! (I was going to post the same comment)
Btw.: Which lisps are actually using low-precision floats? I know
of openmcl (which will return nil if asked whether (= (- 1.3 3/10) 1)),
but the lisp I'm using most these days, LispWorks, will use 53-bit
precision floats everywhere (I think. At least in the Linux version).
--
(espen)
Espen Vestre <·····@*do-not-spam-me*.vestre.net> writes:
> ················@eircom.net (Russell Wallace) writes:
>
> > That's the defect. SQRT et al should do everything in double precision
> > unless specifically given a single precision argument. Might be too
> > late to fix the language specification now, but if it was going to be
> > changed, that's where the change should be made.
>
> AOL! (I was going to post the same comment)
>
> Btw.: Which lisps are actually using low-precision floats? I know
> of openmcl (which will return nil if asked whether (= (- 1.3 3/10) 1)),
> but the lisp I'm using most these days, LispWorks, will use 53 bit
> precision floats everywhere (I think. At least in the linux version).
That's not a low-precision problem. That's a finite precision problem!
--
__Pascal_Bourguignon__
http://www.informatimago.com/
Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> That's not a low-precision problem. That's a finite precision problem!
You're missing the point: In this subthread we were talking about a
completely different matter, namely that the standard specifies coercion
to single-floats in mixed arithmetical computations.
--
(espen)
Espen Vestre <·····@*do-not-spam-me*.vestre.net> writes:
> Pascal Bourguignon <····@thalassa.informatimago.com> writes:
>
> > That's not a low-precision problem. That's a finite precision problem!
>
> You're missing the point: In this subthread we were talking about a
> completely different matter, namely that the standard specifies coercion
> to single-floats in mixed arithmetical computations.
Yep, but even if the standard specified double-floats or
extra-large-floats, or if your implementation has 53-bit floats, you
can still write decimal numbers that can't be represented exactly in
floating-point and you can still have expressions like (= (- 1.3 3/10)
1) returning NIL.
--
__Pascal_Bourguignon__
http://www.informatimago.com/
Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> Yep, but even if the standard specified double-floats or
> extra-large-floats, or if your implementation has 53-bit floats, you
> can still write decimal numbers that can't be represented exactly in
> floating-point and you can still have expressions like (= (- 1.3 3/10)
> 1) returning NIL.
Yes, in fact we should all use 1-bit floats since floats can never be
precise enough anyway.
(or do you actually agree with the concern of the subthread, that the
standard's specification of coercion to single-float may be
unfortunate?)
--
(espen)
Espen Vestre <·····@*do-not-spam-me*.vestre.net> writes:
> Pascal Bourguignon <····@thalassa.informatimago.com> writes:
>
> > Yep, but even if the standard specified double-floats or
> > extra-large-floats, or if your implementation has 53-bit floats, you
> > can still write decimal numbers that can't be represented exactly in
> > floating-point and you can still have expressions like (= (- 1.3 3/10)
> > 1) returning NIL.
>
> Yes, in fact we should all use 1-bit floats since floats can never be
> precise enough anyway.
As a way to avoid false expectations, yes.
> (or do you actually agree with the concern of the subthread, that the
> standard's specification of coercion to single-float may be
> unfortunate?)
I agree that it's unfortunate that the standard specifies coercion
to floating point. You'd get the same problem with big(ger)nums and
double-floats.
--
__Pascal_Bourguignon__
http://www.informatimago.com/
Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> Espen Vestre <·····@*do-not-spam-me*.vestre.net> writes:
> > Yes, in fact we should all use 1-bit floats since floats can never be
> > precise enough anyway.
>
> As a way to avoid false expectations, yes.
We can dispense with computers entirely. That would be more effective.
> > (or do you actually agree with the concern of the subthread, that the
> > standard's specification of coercion to single-float may be
> > unfortunate?)
>
> I agree that it's unfortunate that the standard specifies coercion
> to floating point. You'd get the same problem with big(ger)nums and
> double-floats.
While it does not look too good, it was certainly a very wise
decision. The alternatives, i.e. transforming floats into rationals
when doing mixed arithmetic, are much worse.
While it is cute to be able to get
(= (- 1.3 3/10) 1) ==> T,
I find the idea of getting a rational from
(* 1/2 pi)
not very appealing.
The other issue is that, in this mode, if you are doing lots of
calculations and one single rational leaks in, you are set for sudden
heap overflow.
I would have liked it more, however, if the default precision for
everything were not single float, but double float, or rather the
largest fixed-precision float available.
Mario S. Mommer <········@yahoo.com> writes:
> Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> > Espen Vestre <·····@*do-not-spam-me*.vestre.net> writes:
> > > Yes, in fact we should all use 1-bit floats since floats can never be
> > > precise enough anyway.
> >
> > As a way to avoid false expectations, yes.
>
> We can dispense with computers entirely. That would be more effective.
>
> > > (or do you actually agree with the concern of the subthread, that the
> > > standard's specification of coercion to single-float may be
> > > unfortunate?)
> >
> > I agree that it's unfortunate that the standard specifies coercion
> > to floating point. You'd get the same problem with big(ger)nums and
> > double-floats.
>
> While it does not look too good, it was certainly a very wise
> decision. The alternatives, i.e. transforming floats into rationals
> when doing mixed arithmetic, are much worse.
>
> While it is cute to be able to get
>
> (= (- 1.3 3/10) 1) ==> T,
>
> I find the idea of getting a rational from
>
> (* 1/2 pi)
>
> not very appealing.
It should not be called pi, but: pi+/-epsilon
> The other issue is that, in this mode, if you are doing lots of
> calculations and one single rational leaks in, you are set for sudden
> heap overflow.
>
> I would have liked it more, however, if the default precision for
> everything were not single float, but double float, or rather the
> largest fixed-precision float available.
I think that the way clisp handles it is nice, with a
CUSTOM:*FLOATING-POINT-CONTAGION-ANSI* flag specifying whether you want
the pragmatism of COMMON-LISP or the mathematical honesty of rounding
to the shortest (least precise) type.
Otherwise, we should not merge floating points with non floating
points, and = should not be defined on floating point numbers.
--
__Pascal_Bourguignon__
http://www.informatimago.com/
>>>>> "Pascal" == Pascal Bourguignon <····@thalassa.informatimago.com> writes:
Pascal> I think that the way clisp handles it is nice, with a
Pascal> CUSTOM:*FLOATING-POINT-CONTAGION-ANSI* flag specifying whether you want
Pascal> the pragmatism of COMMON-LISP or the mathematical honesty of rounding
Pascal> to the shortest (least precise) type.
This has always annoyed me about clisp. Either go all the way and do
intervals or don't do them at all. Not this kind of, sort of, halfway,
interval-like thing that pretends to know more than I do about
the numbers I'm computing. Well, it probably does know better, but
it can't always. :-)
Ray
Mario S. Mommer wrote:
> While it is cute to be able to get
>
> (= (- 1.3 3/10) 1) ==> T,
This would not happen even if floats were converted to ratios for
mixed-mode arithmetic, because the numerical value of a floating
point 1.3 is not 13/10. Hence 1.3 is more likely to be converted
to 5854679515581645/4503599627370496 than to 13/10.
Will
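Python's fractions module can make this point concrete (an editorial aside: Fraction applied to a float recovers the exact binary value of the double):

```python
from fractions import Fraction

# The exact value of the double-float closest to 1.3:
exact = Fraction(1.3)
print(exact)  # 5854679515581645/4503599627370496

# It is not 13/10, so even exact rational arithmetic on the *float* 1.3
# cannot make the analogue of (- 1.3 3/10) come out as exactly 1:
print(exact == Fraction(13, 10))     # False
print(exact - Fraction(3, 10) == 1)  # False
```

The inexactness is baked in the moment the literal 1.3 is rounded to a float; no later conversion to a ratio can undo it.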
··········@verizon.net (William D Clinger) writes:
> Mario S. Mommer wrote:
> > While it is cute to be able to get
> >
> > (= (- 1.3 3/10) 1) ==> T,
>
> This would not happen even if floats were converted to ratios for
> mixed-mode arithmetic, because the numerical value of a floating
> point 1.3 is not 13/10. Hence 1.3 is more likely to be converted
> to 5854679515581645/4503599627370496 than to 13/10.
You're right. But what if instead of trying to convert floats to
rationals (which, as you pointed out, would not give 13/10), we tried
to convert "1.3" to a rational? That is, do it in the reader!
(defun parse-decimal (string)
(do* ((dot (position (character ".") string))
(after (+ 2 dot) (1+ after)))
((or (<= (length string) after)
(not (digit-char-p (char string after) *read-base*)))
(values (read-from-string (format nil "~A~A/1~V,'0D"
(subseq string 0 dot)
(subseq string (1+ dot) after)
(- after dot 1) 0))
after))))
(parse-decimal "1.3")
13/10 ;
3
(parse-decimal "1.33333")
133333/100000 ;
7
(parse-decimal "1.32")
33/25 ;
4
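This reader-level idea has a direct counterpart in Python (again an aside, not from the thread): Fraction can parse a decimal string directly, yielding the exact rational the text denotes rather than the nearest float:

```python
from fractions import Fraction

# Parsing the decimal *text* yields the exact rational it denotes:
print(Fraction("1.3"))      # 13/10
print(Fraction("1.33333"))  # 133333/100000
print(Fraction("1.32"))     # 33/25

# Read that way, the identity from upthread really does hold:
print(Fraction("1.3") - Fraction(3, 10) == 1)  # True
```

As with PARSE-DECIMAL above, the trick is to rationalize before the literal is ever rounded to a float.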
--
__Pascal_Bourguignon__
http://www.informatimago.com/