From: Peter Seibel
Subject: float-radix?
Date: 
Message-ID: <m3brii5uqf.fsf@javamonkey.com>
Is the value returned by float-radix solely determined by the type of
float (single, double, etc.)? Or can different values within a type
have different radices? I gather that it's only the type that matters
but I couldn't find anywhere that said so for sure. (May have missed
something.)

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp

From: Nate Holloway
Subject: Re: float-radix?
Date: 
Message-ID: <189890ca.0407151410.4dd89a06@posting.google.com>
Peter Seibel <·····@javamonkey.com> wrote in message news:<··············@javamonkey.com>...
> Is the value returned by float-radix solely determined by the type of
> float (single, double, etc.)? Or can different values within a type
> have different radices? I gather that it's only the type that matters
> but I couldn't find anywhere that said so for sure. (May have missed
> something.)

For IEEE 754 format numbers, the radix is always 2. Nearly all other formats
in common use also have a radix of 2; the main exception is the IBM S/360 (and
its descendants), which uses a radix of 16.

You can think of the normalization step of the floating-point implementation
as shifting the bits of the number so that the most significant 1 bit is
located in the highest digit of the significand. In a radix-2 (binary)
system, this would be the high bit. On systems that use a hidden-bit
representation, this high bit is not actually stored, although its conceptual
position is "just to the left" of the highest stored significand bit.
In a radix-16 (hexadecimal) implementation, the highest digit is a nibble.
It's obvious that no hidden-bit representation is possible this way, since
the number can only be shifted in units of a nibble during normalization;
all that is guaranteed about the leading nibble is that it is nonzero.
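To make that concrete, here is a toy sketch in Python (the function name and
the 6-digit significand width are mine, not from any real format):

```python
def normalize(mantissa, exponent, radix):
    """Shift the mantissa left one radix digit at a time until its
    leading radix digit is nonzero. Value = mantissa * radix**exponent."""
    digits = 6                      # width of the significand, in radix digits
    top = radix ** (digits - 1)
    while 0 < mantissa < top:
        mantissa *= radix           # shift left one digit (1 bit or 1 nibble)
        exponent -= 1
    return mantissa, exponent

# Radix 2: every normalized significand starts with a 1 bit, so that
# bit carries no information and can be left out (the "hidden bit").
m2, e2 = normalize(0b000101, 0, 2)      # -> (0b101000, -3)

# Radix 16: the leading nibble is merely nonzero (1..15), so it must
# be stored in full; no hidden-bit trick is possible.
m16, e16 = normalize(0x000ABC, 0, 16)   # -> (0xABC000, -3)
```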

Why use radix 16? It yields a greater range with the same-size exponent
field, at some possible loss of precision compared to radix 2. It could also
simplify the hardware, since numbers are always shifted in units of 4 bits.
Not coincidentally, that is also the size of a BCD digit, so the same hardware
can be used for both.
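A back-of-the-envelope check in Python (toy numbers; both hypothetical
formats are given the 7-bit exponent field that the S/360 format actually
uses):

```python
import math

exp_bits = 7
emax = 2 ** (exp_bits - 1)          # exponent runs to about +64

radix2_top = 2.0 ** emax            # largest scale reachable with radix 2
radix16_top = 16.0 ** emax          # largest scale reachable with radix 16

print(round(math.log10(radix2_top)))    # about 19 decimal orders of magnitude
print(round(math.log10(radix16_top)))   # about 77 -- the familiar S/360 ceiling
```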

Is there even a Common Lisp implementation for, e.g., the ES/390?
If not, this question would never occur in practice.

#1=#:nate
--
From: Christopher C. Stacy
Subject: Re: float-radix?
Date: 
Message-ID: <uvfgovrrz.fsf@news.dtpq.com>
>>>>> On 15 Jul 2004 15:10:09 -0700, Nate Holloway ("Nate") writes:

 Nate> Is there even a Common Lisp implementation for, e.g., the ES/390?
 Nate> If not, this question would never occur in practice.

By which you mean "current practice, until someone does bring up
the question by implementing it for such a computer".
From: Duane Rettig
Subject: Re: float-radix?
Date: 
Message-ID: <4llhkki85.fsf@franz.com>
······@news.dtpq.com (Christopher C. Stacy) writes:

> >>>>> On 15 Jul 2004 15:10:09 -0700, Nate Holloway ("Nate") writes:
> 
>  Nate> Is there even a Common Lisp implementation for, e.g., the ES/390?
>  Nate> If not, this question would never occur in practice.

Actually, we did, back in the 1986-1988 timeframe.  It was on the
370-series hardware, and I ported both Franz Lisp and Allegro CL
to it (it was my first port of Franz Lisp, and my second, after the
Cray, of Allegro CL).  The float-radix issue was painful, because
we truly had to rely on "closeness" and epsilon arithmetic when
testing: the fact that the bits didn't come out exactly the
same on two different architectures didn't mean that the answers
were "wrong".

> By which you mean "current practice, until someone does bring up
> the question by implementing it for such a computer".

Well, it is true that it had come up, and it is also true that
our sources tend to have most of what would be necessary to bring
back a re-port to that architecture.  However, just as we have
parameterizations in our sources for word-addressing vs byte-addressing
(from the Cray ports), and float-radix is also parameterized, I
would not at all be surprised if some assumptions haven't crept
in over the years that would have to be debugged if we were to
go back to either architecture.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Nate Holloway
Subject: Re: float-radix?
Date: 
Message-ID: <189890ca.0407151948.62a7f1b7@posting.google.com>
······@news.dtpq.com (Christopher C. Stacy) wrote in message news:<·············@news.dtpq.com>...
> By which you mean "current practice, until someone does bring up
> the question by implementing it for such a computer".

I thought about it some more, and it's actually even worse. A bigfloat
implementation that, for performance reasons, uses a radix of 256 could
be written on any architecture. Some architectures, like the x86 and the
PowerPC, can align data on byte boundaries automatically, so it would
be advantageous there to use a bigfloat representation whose digits are
8 bits in size.

Fortunately, Common Lisp requires that all FLOAT types have a most-
positive and most-negative value, which a bigfloat type with unlimited
range would not be able to define.  So it seems like bigfloats are
required to be implemented at a level above the language itself.

All of this does not necessarily exclude extended-precision float
representations which have a fixed exponent field and are not,
therefore, bigfloats; such a format could of course in theory use
16 or 256 for its radix, but I have not heard of this ever being done.

'#1=#:nate
--
From: Peter Seibel
Subject: Re: float-radix?
Date: 
Message-ID: <m3oemgu559.fsf@javamonkey.com>
····@crackaddict.com (Nate Holloway) writes:

> Peter Seibel <·····@javamonkey.com> wrote in message news:<··············@javamonkey.com>...
>> Is the value returned by float-radix solely determined by the type of
>> float (single, double, etc.)? Or can different values within a type
>> have different radices? I gather that it's only the type that matters
>> but I couldn't find anywhere that said so for sure. (May have missed
>> something.)
>
> For IEEE 754 format numbers, the radix is always 2. Nearly all other
> formats in common use also use a radix of 2, except for the IBM
> S/360 (and its descendants), which use a radix of 16.

So it sounds like the value returned by FLOAT-RADIX is likely
determined by the hardware or at the very least some implementation
choice. So does anyone know why FLOAT-RADIX was defined as a function
that takes an argument rather than as a constant along the lines of
LEAST-POSITIVE-DOUBLE-FLOAT? (I.e. because FLOAT-RADIX is a function
that takes an argument I expected that there would be some situation
where it would matter what argument I passed it. But it sounds like
not.)

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Duane Rettig
Subject: Re: float-radix?
Date: 
Message-ID: <4hds8jywf.fsf@franz.com>
Peter Seibel <·····@javamonkey.com> writes:

> ····@crackaddict.com (Nate Holloway) writes:
> 
> > Peter Seibel <·····@javamonkey.com> wrote in message news:<··············@javamonkey.com>...
> >> Is the value returned by float-radix solely determined by the type of
> >> float (single, double, etc.)? Or can different values within a type
> >> have different radices? I gather that it's only the type that matters
> >> but I couldn't find anywhere that said so for sure. (May have missed
> >> something.)
> >
> > For IEEE 754 format numbers, the radix is always 2. Nearly all other
> > formats in common use also use a radix of 2, except for the IBM
> > S/360 (and its descendants), which use a radix of 16.
> 
> So it sounds like the value returned by FLOAT-RADIX is likely
> determined by the hardware or at the very least some implementation
> choice.

Definitely the hardware; if it is an implementation choice, it
would be an emulation of hardware, real or imaginary (like a
bigfloat package, as another poster has mentioned).

> So does anyone know why FLOAT-RADIX was defined as a function
> that takes an argument rather than as a constant along the lines of
> LEAST-POSITIVE-DOUBLE-FLOAT? (I.e. because FLOAT-RADIX is a function
> that takes an argument I expected that there would be some situation
> where it would matter what argument I passed it. But it sounds like
> not.)

Consider the example of the (DEC) Alpha, which has two different
floating point representations in hardware; the S and T formats,
which are IEEE-754 compatible, and the F and G formats, VAX
compatible.  Now, I think that the float-radix is the same on
these (can't say for sure, because I don't know the Vax formats),
but imagine a parallel universe in which IBM felt pressure to
become IEEE-754 compatible with their mainframe architecture, and
they added a whole IEEE-754 hardware module.  There might be some
issue as to what could constitute short, single, double, and long
float in ANSI CL, but conceivably an implementation could have
defined different float formats for the four different CL float
formats.  So in that case, float-radix would not be a constant,
but would need to know which float you were asking about.

Or, in the case of a bigfloat with a large float radix (presumably
mapped into the long-float type), the float-radix would also be tied
to the actual float of that type.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Lars Brinkhoff
Subject: Re: float-radix?
Date: 
Message-ID: <857jt16thl.fsf@junk.nocrew.org>
Duane Rettig <·····@franz.com> writes:
> imagine a parallel universe in which IBM felt pressure to become
> IEEE-754 compatible with their mainframe architecture, and they
> added a whole IEEE-754 hardware module.

Greetings! This is a message from a parallel universe.
http://www.research.ibm.com/journal/rd/435/schwarz.html
From: Duane Rettig
Subject: Re: float-radix?
Date: 
Message-ID: <4brictse4.fsf@franz.com>
Lars Brinkhoff <·········@nocrew.org> writes:

> Duane Rettig <·····@franz.com> writes:
> > imagine a parallel universe in which IBM felt pressure to become
> > IEEE-754 compatible with their mainframe architecture, and they
> > added a whole IEEE-754 hardware module.
> 
> Greetings! This is a message from a parallel universe.
> http://www.research.ibm.com/journal/rd/435/schwarz.html

What a refreshingly tactful way of telling me I'm full of it!
In my defense, I can only say that I haven't had any reason
to look at 360-and-up architectures since slightly after I left
Amdahl in 1987 - the only big-blue architecture I've been working
with recently has been the Power series (which already have the
IEEE-754 fpu).  I guess I should have known that IBM would have
seen the light.  Sure took them long enough, though...

Any relationship in the G4/G5 names to the Power architectures?
Or are they really fudging by making their new "360"s out of
the Power architecture?

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Lars Brinkhoff
Subject: Re: float-radix?
Date: 
Message-ID: <85pt6r6gvb.fsf@junk.nocrew.org>
Duane Rettig <·····@franz.com> writes:
> I can only say that I haven't had any reason to look at 360-and-up
> architectures since slightly after I left Amdahl in 1987 - the only
> big-blue architecture I've been working with recently has been the
> Power series (which already have the IEEE-754 fpu).  I guess I
> should have known that IBM would have seen the light.  Sure took
> them long enough, though...

I believe it has something to do with IBM's support of Java.

> Any relationship in the G4/G5 names to the Power architectures?

I'm hardly an expert, but according to a quick web search, the Gn (for
n from 1 to 6 or thereabouts) names apparently denote the generations
of ESA/390 hardware implementations.

An unsubstantiated rumour says that Apple's marketing department came
up with G3, G4, and G5 (there were probably no G1 or G2) as names for
machine models (indicating which PowerPC CPU is used).

> Or are they really fudging by making their new "360"s out of the
> Power architecture?

No.

-- 
Lars Brinkhoff,         Services for Unix, Linux, GCC, HTTP
Brinkhoff Consulting    http://www.brinkhoff.se/
From: Tim Bradshaw
Subject: Re: float-radix?
Date: 
Message-ID: <fbc0f5d1.0407160153.1acc9cde@posting.google.com>
Peter Seibel <·····@javamonkey.com> wrote in message news:<··············@javamonkey.com>...

> 
> So it sounds like the value returned by FLOAT-RADIX is likely
> determined by the hardware or at the very least some implementation
> choice. So does anyone know why FLOAT-RADIX was defined as a function
> that takes an argument rather than as a constant along the lines of
> LEAST-POSITIVE-DOUBLE-FLOAT? (I.e. because FLOAT-RADIX is a function
> that takes an argument I expected that there would be some situation
> where it would matter what argument I passed it. But it sounds like
> not.)
>

Different floating point types could have different radixes (radices?
radishes?) on the same hardware.  SINGLE-FLOAT could be different from
DOUBLE-FLOAT, for instance.  And there could (couldn't there? I'm not
sure if the spec allows it) be implementation-defined subtypes such as
SINGLE-FLOAT/BASE-16 and SINGLE-FLOAT/BASE-2 on HW that could support
several different formats.  For instance as far as I can see IBM z
series machines have at least some support for both base 16 and base 2
floats: they at least have operations to convert to and from IEEE
floats, I'm not sure they can actually manipulate the IEEE floats.  In
any case an implementation on a z series machine might well want both
formats even if all it could really do with the base 2 ones was coerce
them to base 16: that way it could read & write arrays of IEEE floats.

--tim
From: Mario S. Mommer
Subject: Re: float-radix?
Date: 
Message-ID: <fz4qo8tfvx.fsf@germany.igpm.rwth-aachen.de>
··········@tfeb.org (Tim Bradshaw) writes:
> Different floating point types could have different radixes (radices?
> radishes?) on the same hardware.  SINGLE-FLOAT could be different than
> DOUBLE-FLOAT for instance.  And there could (couldn't there? I'm not
> sure if the spec allows it) be implementation-defined subtypes such as
> SINGLE-FLOAT/BASE-16 and SINGLE-FLOAT/BASE-2 on HW that could support
> several different formats.  For instance as far as I can see IBM z
> series machines have at least some support for both base 16 and base 2
> floats: they at least have operations to convert to and from IEEE
> floats, I'm not sure they can actually manipulate the IEEE floats.  In
> any case an implementation on a z series machine might well want both
> formats even if all it could really do with the base 2 ones was coerce
> them to base 16: that way it could read & write arrays of IEEE floats.

Might well be, might all well be.

I've read somewhere (I can't find it at the moment, and I'm too busy to
work it out by myself - sorry... I may well be wrong :-| ) that
radix 2 gives you the most precision per bit. So radix 16 is a
rather bad idea because you get less precision from the same number
of bits.
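For what it's worth, the loss is easy to quantify with a sketch in Python
(the 24-bit significand width is that of the S/360 single format; the helper
function is mine):

```python
# "Wobble" of a hex-float significand: S/360 single precision stores
# 24 significand bits (6 nibbles), but normalization only guarantees a
# nonzero leading *nibble*, so up to 3 of those bits can be leading zeros.
def effective_bits(leading_nibble, total_bits=24):
    assert 1 <= leading_nibble <= 15
    wasted = 4 - leading_nibble.bit_length()   # leading zero bits in the nibble
    return total_bits - wasted

print(effective_bits(0x1))   # 21 bits: worst case, leading nibble is 0001
print(effective_bits(0x8))   # 24 bits: best case, high bit of nibble is set
```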

The only alternative radix that makes sense, in light of that, seems
to me to be 10, since it is the "correct" radix when you compute with
monetary quantities (because you can represent them exactly). I've
heard that IBM mainframes do that, for instance.
From: Pascal Bourguignon
Subject: Re: float-radix?
Date: 
Message-ID: <87vfgo9kl8.fsf@thalassa.informatimago.com>
Mario S. Mommer <········@yahoo.com> writes:
> The only alternative radix that makes sense, in light of that, seems
> to me to be 10, since it is the "correct" radix when you compute with
> monetary quantities (because you can represent them exactly). I've
> heard that IBM mainframes do that, for instance.

You're not making any sense!  Money is an integer number of cents.
Anything else and you've got accounting discrepancies.  But of course
with "accounting errors" such as WorldCom, Fannie Mae, Network
Associates, and others, cents don't matter.
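A minimal Python illustration (made-up amounts) of why the cents should be
integers:

```python
# Accumulate ten thousand $0.10 charges two ways.
float_total = sum(0.10 for _ in range(10_000))   # binary-float dollars
cent_total = sum(10 for _ in range(10_000))      # integer cents

print(float_total == 1000.0)    # False: rounding error has crept in
print(cent_total == 100_000)    # True: integer arithmetic is exact
```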

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

There is no worse tyranny than to force a man to pay for what he does not
want merely because you think it would be good for him. -- Robert Heinlein