From: Max Hailperin
Subject: IEEE FP NaNs = everything else?
Date: 
Message-ID: <1990Mar15.211150.19338@Neon.Stanford.EDU>
Has anyone explored the possibility of using IEEE floating-point as a
general representation in a manifestly-typed language, with everything
other than flonums being NaNs (Not-A-Numbers)?  On the surface, this
seems both attractive and ridiculous.  If I had to make a guess, I'd
guess that the former only took precedence over the latter for serious
crunching on specialized 64-bit machines.
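
To make the idea concrete, here is a rough sketch (in C, for illustration)
of how non-flonums might hide inside the NaN space of a 64-bit IEEE double.
The tag values, field widths, and names below are all made up:

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* A double is a NaN when its 11 exponent bits are all ones and its 52-bit
       mantissa is nonzero; keeping the quiet bit set leaves 51 bits free for
       a tag and a payload. */
    #define QNAN_BITS    UINT64_C(0x7FF8000000000000)
    #define TAG_SHIFT    48
    #define TAG_FIXNUM   UINT64_C(1)                   /* made-up tag assignments */
    #define TAG_POINTER  UINT64_C(2)
    #define PAYLOAD_MASK UINT64_C(0x0000FFFFFFFFFFFF)

    static double box_fixnum(uint64_t n)               /* n must fit in 48 bits */
    {
        uint64_t bits = QNAN_BITS | (TAG_FIXNUM << TAG_SHIFT) | (n & PAYLOAD_MASK);
        double d;
        memcpy(&d, &bits, sizeof d);                   /* reinterpret the bit pattern */
        return d;
    }

    static uint64_t tag_of(double d)                   /* 0 means: an ordinary flonum */
    {
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);
        if ((bits & QNAN_BITS) != QNAN_BITS)
            return 0;                                  /* not in the NaN space at all */
        return (bits >> TAG_SHIFT) & 0x7;              /* 3 tag bits above the payload */
    }

    static uint64_t unbox_fixnum(double d)
    {
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);
        return bits & PAYLOAD_MASK;
    }

    int main(void)
    {
        double three = 3.0;                            /* an ordinary flonum is itself */
        double one   = box_fixnum(1);                  /* the fixnum 1, hiding in a NaN */
        printf("%d %d %d\n",
               (int)tag_of(three), (int)tag_of(one),
               (int)unbox_fixnum(one));                /* prints: 0 1 1 */
        return 0;
    }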

But the question is: can anyone do better than my 2-minute idle speculation?
Thanks.

From: Barry Margolin
Subject: Re: IEEE FP NaNs = everything else?
Date: 
Message-ID: <34725@news.Think.COM>
In article <······················@Neon.Stanford.EDU> ···@Neon.Stanford.EDU (Max Hailperin) writes:
>Has anyone explored the possibility of using IEEE floating-point as a
>general representation in a manifestly-typed language, with everything
>other than flonums being NaNs (Not-A-Numbers)?

The IEEE rule is that any arithmetic involving NaNs must result in a NaN.
But if fixnums are implemented as NaNs then this means that (+ 3.0 1) must
evaluate to a NaN rather than 4.0, since this would be (+ 3.0 NaN).
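
For instance (a quick C illustration; nan() is just standing in for whatever
quiet-NaN encoding the fixnum 1 would get):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double one = nan("1");            /* the fixnum 1 boxed as a quiet NaN */
        printf("%f\n", 3.0 + one);        /* prints nan, not 4.000000 */
        return 0;
    }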

Perhaps, instead of using NaNs for all non-fixnums you should use NaNs for
all non-numbers (NaN *does* stand for Not a Number, so this makes sense).
You could then use signalling NaNs, which would cause ordinary IEEE FP
hardware to trap on things like (+ 3.0 'a).
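
Roughly (a C sketch, assuming the usual signalling-NaN bit pattern -- quiet
bit clear -- and the C99 <fenv.h> flags; whether the hardware actually traps,
rather than just raising the invalid flag, depends on the FPU's trap-enable
bits):

    #include <fenv.h>
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t snan_bits = UINT64_C(0x7FF0000000000001);  /* exp all ones, quiet bit 0 */
        double boxed_symbol;                                /* stand-in for the symbol A */
        memcpy(&boxed_symbol, &snan_bits, sizeof boxed_symbol);

        feclearexcept(FE_ALL_EXCEPT);
        volatile double r = 3.0 + boxed_symbol;             /* (+ 3.0 'a) */
        (void)r;
        printf("invalid raised: %d\n", fetestexcept(FE_INVALID) != 0);  /* prints 1 */
        return 0;
    }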

--
Barry Margolin, Thinking Machines Corp.

······@think.com
{uunet,harvard}!think!barmar

From: Max Hailperin
Subject: Re: IEEE FP NaNs = everything else?
Date: 
Message-ID: <1990Mar16.161849.18850@Neon.Stanford.EDU>
In article <·····@news.Think.COM> ······@nugodot.think.com.UUCP (Barry Margolin) writes:
>In article <······················@Neon.Stanford.EDU> ···@Neon.Stanford.EDU (Max Hailperin) writes:
>>Has anyone explored the possibility of using IEEE floating-point as a
>>general representation in a manifestly-typed language, with everything
>>other than flonums being NaNs (Not-A-Numbers)?
>
>The IEEE rule is that any arithmetic involving NaNs must result in a NaN.

That's for quiet NaNs -- I had signalling NaNs in mind, in which case you
could take a trap and do whatever you felt like.
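
(For instance, on a system with glibc's non-standard feenableexcept, the
invalid exception can be turned into an actual trap; the SIGFPE handler in
this sketch just reports, but it is where a runtime would do its
generic-arithmetic dispatch -- resuming cleanly afterwards is the
machine-dependent part:)

    #define _GNU_SOURCE                      /* for feenableexcept (glibc extension) */
    #include <fenv.h>
    #include <signal.h>
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_fpe(int sig)
    {
        (void)sig;
        /* a real runtime would decode the operands and dispatch here */
        write(1, "trapped: operand was not a flonum\n", 34);
        _exit(0);
    }

    int main(void)
    {
        signal(SIGFPE, on_fpe);
        feenableexcept(FE_INVALID);          /* invalid operations now deliver SIGFPE */

        uint64_t snan_bits = UINT64_C(0x7FF0000000000001);
        double boxed_fixnum;                 /* the fixnum 1, hiding in a signalling NaN */
        memcpy(&boxed_fixnum, &snan_bits, sizeof boxed_fixnum);

        volatile double r = 3.0 + boxed_fixnum;   /* (+ 3.0 1): traps instead of a NaN */
        (void)r;
        printf("no trap taken (FP traps unsupported here)\n");
        return 0;
    }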

>But if fixnums are implemented as NaNs then this means that (+ 3.0 1) must
>evaluate to a NaN rather than 4.0, since this would be (+ 3.0 NaN).

See above.

>Perhaps, instead of using NaNs for all non-fixnums you should use NaNs for
                                            ^^^ I assume you meant flo
>all non-numbers (NaN *does* stand for Not a Number, so this makes sense).

If we just used the IEEE flonum representation for all numbers, even integers,
then that would sort of make sense [though there are serious problems], but
then we would still be using NaNs for all non-flonums, because non-numbers and
non-flonums would be one and the same.
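
(One of those serious problems, for example: a 64-bit double only represents
integers exactly up to 2^53, so integer arithmetic would silently lose
precision beyond that.  A quick check in C:)

    #include <stdio.h>

    int main(void)
    {
        double big = 9007199254740992.0;      /* 2^53 */
        printf("%d\n", big + 1.0 == big);     /* prints 1: 2^53 + 1 rounds back down */
        return 0;
    }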

>You could then use signalling NaNs, which would cause ordinary IEEE FP
>hardware to trap on things like (+ 3.0 'a).

As I said up front, why not use this for (+ 3.0 1) as well?